Jan 26 09:00:48 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 26 09:00:48 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 26 09:00:48 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 26 09:00:48 localhost kernel: BIOS-provided physical RAM map:
Jan 26 09:00:48 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 26 09:00:48 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 26 09:00:48 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 26 09:00:48 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 26 09:00:48 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 26 09:00:48 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 26 09:00:48 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 26 09:00:48 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 26 09:00:48 localhost kernel: NX (Execute Disable) protection: active
Jan 26 09:00:48 localhost kernel: APIC: Static calls initialized
Jan 26 09:00:48 localhost kernel: SMBIOS 2.8 present.
Jan 26 09:00:48 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 26 09:00:48 localhost kernel: Hypervisor detected: KVM
Jan 26 09:00:48 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 26 09:00:48 localhost kernel: kvm-clock: using sched offset of 3519925351 cycles
Jan 26 09:00:48 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 26 09:00:48 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 26 09:00:48 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 26 09:00:48 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 26 09:00:48 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 26 09:00:48 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 26 09:00:48 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 26 09:00:48 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 26 09:00:48 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 26 09:00:48 localhost kernel: Using GB pages for direct mapping
Jan 26 09:00:48 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 26 09:00:48 localhost kernel: ACPI: Early table checksum verification disabled
Jan 26 09:00:48 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 26 09:00:48 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 09:00:48 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 09:00:48 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 09:00:48 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 26 09:00:48 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 09:00:48 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 09:00:48 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 26 09:00:48 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 26 09:00:48 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 26 09:00:48 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 26 09:00:48 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 26 09:00:48 localhost kernel: No NUMA configuration found
Jan 26 09:00:48 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 26 09:00:48 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 26 09:00:48 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 26 09:00:48 localhost kernel: Zone ranges:
Jan 26 09:00:48 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 26 09:00:48 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 26 09:00:48 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 26 09:00:48 localhost kernel:   Device   empty
Jan 26 09:00:48 localhost kernel: Movable zone start for each node
Jan 26 09:00:48 localhost kernel: Early memory node ranges
Jan 26 09:00:48 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 26 09:00:48 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 26 09:00:48 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 26 09:00:48 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 26 09:00:48 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 26 09:00:48 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 26 09:00:48 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 26 09:00:48 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 26 09:00:48 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 26 09:00:48 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 26 09:00:48 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 26 09:00:48 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 26 09:00:48 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 26 09:00:48 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 26 09:00:48 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 26 09:00:48 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 26 09:00:48 localhost kernel: TSC deadline timer available
Jan 26 09:00:48 localhost kernel: CPU topo: Max. logical packages:   8
Jan 26 09:00:48 localhost kernel: CPU topo: Max. logical dies:       8
Jan 26 09:00:48 localhost kernel: CPU topo: Max. dies per package:   1
Jan 26 09:00:48 localhost kernel: CPU topo: Max. threads per core:   1
Jan 26 09:00:48 localhost kernel: CPU topo: Num. cores per package:     1
Jan 26 09:00:48 localhost kernel: CPU topo: Num. threads per package:   1
Jan 26 09:00:48 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 26 09:00:48 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 26 09:00:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 26 09:00:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 26 09:00:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 26 09:00:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 26 09:00:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 26 09:00:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 26 09:00:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 26 09:00:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 26 09:00:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 26 09:00:48 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 26 09:00:48 localhost kernel: Booting paravirtualized kernel on KVM
Jan 26 09:00:48 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 26 09:00:48 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 26 09:00:48 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 26 09:00:48 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 26 09:00:48 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 26 09:00:48 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 26 09:00:48 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 26 09:00:48 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 26 09:00:48 localhost kernel: random: crng init done
Jan 26 09:00:48 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 26 09:00:48 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 26 09:00:48 localhost kernel: Fallback order for Node 0: 0 
Jan 26 09:00:48 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 26 09:00:48 localhost kernel: Policy zone: Normal
Jan 26 09:00:48 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 26 09:00:48 localhost kernel: software IO TLB: area num 8.
Jan 26 09:00:48 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 26 09:00:48 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 26 09:00:48 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 26 09:00:48 localhost kernel: Dynamic Preempt: voluntary
Jan 26 09:00:48 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 26 09:00:48 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 26 09:00:48 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 26 09:00:48 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 26 09:00:48 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 26 09:00:48 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 26 09:00:48 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 26 09:00:48 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 26 09:00:48 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 26 09:00:48 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 26 09:00:48 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 26 09:00:48 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 26 09:00:48 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 26 09:00:48 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 26 09:00:48 localhost kernel: Console: colour VGA+ 80x25
Jan 26 09:00:48 localhost kernel: printk: console [ttyS0] enabled
Jan 26 09:00:48 localhost kernel: ACPI: Core revision 20230331
Jan 26 09:00:48 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 26 09:00:48 localhost kernel: x2apic enabled
Jan 26 09:00:48 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 26 09:00:48 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 26 09:00:48 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 26 09:00:48 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 26 09:00:48 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 26 09:00:48 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 26 09:00:48 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 26 09:00:48 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 26 09:00:48 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 26 09:00:48 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 26 09:00:48 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 26 09:00:48 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 26 09:00:48 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 26 09:00:48 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 26 09:00:48 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 26 09:00:48 localhost kernel: x86/bugs: return thunk changed
Jan 26 09:00:48 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 26 09:00:48 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 26 09:00:48 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 26 09:00:48 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 26 09:00:48 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 26 09:00:48 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 26 09:00:48 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 26 09:00:48 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 26 09:00:48 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 26 09:00:48 localhost kernel: landlock: Up and running.
Jan 26 09:00:48 localhost kernel: Yama: becoming mindful.
Jan 26 09:00:48 localhost kernel: SELinux:  Initializing.
Jan 26 09:00:48 localhost kernel: LSM support for eBPF active
Jan 26 09:00:48 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 26 09:00:48 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 26 09:00:48 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 26 09:00:48 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 26 09:00:48 localhost kernel: ... version:                0
Jan 26 09:00:48 localhost kernel: ... bit width:              48
Jan 26 09:00:48 localhost kernel: ... generic registers:      6
Jan 26 09:00:48 localhost kernel: ... value mask:             0000ffffffffffff
Jan 26 09:00:48 localhost kernel: ... max period:             00007fffffffffff
Jan 26 09:00:48 localhost kernel: ... fixed-purpose events:   0
Jan 26 09:00:48 localhost kernel: ... event mask:             000000000000003f
Jan 26 09:00:48 localhost kernel: signal: max sigframe size: 1776
Jan 26 09:00:48 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 26 09:00:48 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 26 09:00:48 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 26 09:00:48 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 26 09:00:48 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 26 09:00:48 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 26 09:00:48 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 26 09:00:48 localhost kernel: node 0 deferred pages initialised in 10ms
Jan 26 09:00:48 localhost kernel: Memory: 7763768K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618364K reserved, 0K cma-reserved)
Jan 26 09:00:48 localhost kernel: devtmpfs: initialized
Jan 26 09:00:48 localhost kernel: x86/mm: Memory block size: 128MB
Jan 26 09:00:48 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 26 09:00:48 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 26 09:00:48 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 26 09:00:48 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 26 09:00:48 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 26 09:00:48 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 26 09:00:48 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 26 09:00:48 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 26 09:00:48 localhost kernel: audit: type=2000 audit(1769418046.161:1): state=initialized audit_enabled=0 res=1
Jan 26 09:00:48 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 26 09:00:48 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 26 09:00:48 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 26 09:00:48 localhost kernel: cpuidle: using governor menu
Jan 26 09:00:48 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 26 09:00:48 localhost kernel: PCI: Using configuration type 1 for base access
Jan 26 09:00:48 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 26 09:00:48 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 26 09:00:48 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 26 09:00:48 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 26 09:00:48 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 26 09:00:48 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 26 09:00:48 localhost kernel: Demotion targets for Node 0: null
Jan 26 09:00:48 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 26 09:00:48 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 26 09:00:48 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 26 09:00:48 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 26 09:00:48 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 26 09:00:48 localhost kernel: ACPI: Interpreter enabled
Jan 26 09:00:48 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 26 09:00:48 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 26 09:00:48 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 26 09:00:48 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 26 09:00:48 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 26 09:00:48 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 26 09:00:48 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [3] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [4] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [5] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [6] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [7] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [8] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [9] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [10] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [11] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [12] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [13] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [14] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [15] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [16] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [17] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [18] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [19] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [20] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [21] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [22] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [23] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [24] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [25] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [26] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [27] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [28] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [29] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [30] registered
Jan 26 09:00:48 localhost kernel: acpiphp: Slot [31] registered
Jan 26 09:00:48 localhost kernel: PCI host bridge to bus 0000:00
Jan 26 09:00:48 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 26 09:00:48 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 26 09:00:48 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 26 09:00:48 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 26 09:00:48 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 26 09:00:48 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 26 09:00:48 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 26 09:00:48 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 26 09:00:48 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 26 09:00:48 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 26 09:00:48 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 26 09:00:48 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 26 09:00:48 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 26 09:00:48 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 26 09:00:48 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 26 09:00:48 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 26 09:00:48 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 26 09:00:48 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 26 09:00:48 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 26 09:00:48 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 26 09:00:48 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 26 09:00:48 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 26 09:00:48 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 26 09:00:48 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 26 09:00:48 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 26 09:00:48 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 26 09:00:48 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 26 09:00:48 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 26 09:00:48 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 26 09:00:48 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 26 09:00:48 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 26 09:00:48 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 26 09:00:48 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 26 09:00:48 localhost kernel: iommu: Default domain type: Translated
Jan 26 09:00:48 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 26 09:00:48 localhost kernel: SCSI subsystem initialized
Jan 26 09:00:48 localhost kernel: ACPI: bus type USB registered
Jan 26 09:00:48 localhost kernel: usbcore: registered new interface driver usbfs
Jan 26 09:00:48 localhost kernel: usbcore: registered new interface driver hub
Jan 26 09:00:48 localhost kernel: usbcore: registered new device driver usb
Jan 26 09:00:48 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 26 09:00:48 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 26 09:00:48 localhost kernel: PTP clock support registered
Jan 26 09:00:48 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 26 09:00:48 localhost kernel: NetLabel: Initializing
Jan 26 09:00:48 localhost kernel: NetLabel:  domain hash size = 128
Jan 26 09:00:48 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 26 09:00:48 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 26 09:00:48 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 26 09:00:48 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 26 09:00:48 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 26 09:00:48 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 26 09:00:48 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 26 09:00:48 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 26 09:00:48 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 26 09:00:48 localhost kernel: vgaarb: loaded
Jan 26 09:00:48 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 26 09:00:48 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 26 09:00:48 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 26 09:00:48 localhost kernel: pnp: PnP ACPI init
Jan 26 09:00:48 localhost kernel: pnp 00:03: [dma 2]
Jan 26 09:00:48 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 26 09:00:48 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 26 09:00:48 localhost kernel: NET: Registered PF_INET protocol family
Jan 26 09:00:48 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 26 09:00:48 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 26 09:00:48 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 26 09:00:48 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 26 09:00:48 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 26 09:00:48 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 26 09:00:48 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 26 09:00:48 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 26 09:00:48 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 26 09:00:48 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 26 09:00:48 localhost kernel: NET: Registered PF_XDP protocol family
Jan 26 09:00:48 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 26 09:00:48 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 26 09:00:48 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 26 09:00:48 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 26 09:00:48 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 26 09:00:48 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 26 09:00:48 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 26 09:00:48 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 93877 usecs
Jan 26 09:00:48 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 26 09:00:48 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 26 09:00:48 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 26 09:00:48 localhost kernel: ACPI: bus type thunderbolt registered
Jan 26 09:00:48 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 26 09:00:48 localhost kernel: Initialise system trusted keyrings
Jan 26 09:00:48 localhost kernel: Key type blacklist registered
Jan 26 09:00:48 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 26 09:00:48 localhost kernel: zbud: loaded
Jan 26 09:00:48 localhost kernel: integrity: Platform Keyring initialized
Jan 26 09:00:48 localhost kernel: integrity: Machine keyring initialized
Jan 26 09:00:48 localhost kernel: Freeing initrd memory: 87956K
Jan 26 09:00:48 localhost kernel: NET: Registered PF_ALG protocol family
Jan 26 09:00:48 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 26 09:00:48 localhost kernel: Key type asymmetric registered
Jan 26 09:00:48 localhost kernel: Asymmetric key parser 'x509' registered
Jan 26 09:00:48 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 26 09:00:48 localhost kernel: io scheduler mq-deadline registered
Jan 26 09:00:48 localhost kernel: io scheduler kyber registered
Jan 26 09:00:48 localhost kernel: io scheduler bfq registered
Jan 26 09:00:48 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 26 09:00:48 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 26 09:00:48 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 26 09:00:48 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 26 09:00:48 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 26 09:00:48 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 26 09:00:48 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 26 09:00:48 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 26 09:00:48 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 26 09:00:48 localhost kernel: Non-volatile memory driver v1.3
Jan 26 09:00:48 localhost kernel: rdac: device handler registered
Jan 26 09:00:48 localhost kernel: hp_sw: device handler registered
Jan 26 09:00:48 localhost kernel: emc: device handler registered
Jan 26 09:00:48 localhost kernel: alua: device handler registered
Jan 26 09:00:48 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 26 09:00:48 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 26 09:00:48 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 26 09:00:48 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 26 09:00:48 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 26 09:00:48 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 26 09:00:48 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 26 09:00:48 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 26 09:00:48 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 26 09:00:48 localhost kernel: hub 1-0:1.0: USB hub found
Jan 26 09:00:48 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 26 09:00:48 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 26 09:00:48 localhost kernel: usbserial: USB Serial support registered for generic
Jan 26 09:00:48 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 26 09:00:48 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 26 09:00:48 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 26 09:00:48 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 26 09:00:48 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 26 09:00:48 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 26 09:00:48 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 26 09:00:48 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 26 09:00:48 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-26T09:00:47 UTC (1769418047)
Jan 26 09:00:48 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 26 09:00:48 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 26 09:00:48 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 26 09:00:48 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 26 09:00:48 localhost kernel: usbcore: registered new interface driver usbhid
Jan 26 09:00:48 localhost kernel: usbhid: USB HID core driver
Jan 26 09:00:48 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 26 09:00:48 localhost kernel: Initializing XFRM netlink socket
Jan 26 09:00:48 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 26 09:00:48 localhost kernel: Segment Routing with IPv6
Jan 26 09:00:48 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 26 09:00:48 localhost kernel: mpls_gso: MPLS GSO support
Jan 26 09:00:48 localhost kernel: IPI shorthand broadcast: enabled
Jan 26 09:00:48 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 26 09:00:48 localhost kernel: AES CTR mode by8 optimization enabled
Jan 26 09:00:48 localhost kernel: sched_clock: Marking stable (1263002674, 147793995)->(1523584899, -112788230)
Jan 26 09:00:48 localhost kernel: registered taskstats version 1
Jan 26 09:00:48 localhost kernel: Loading compiled-in X.509 certificates
Jan 26 09:00:48 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 26 09:00:48 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 26 09:00:48 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 26 09:00:48 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 26 09:00:48 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 26 09:00:48 localhost kernel: Demotion targets for Node 0: null
Jan 26 09:00:48 localhost kernel: page_owner is disabled
Jan 26 09:00:48 localhost kernel: Key type .fscrypt registered
Jan 26 09:00:48 localhost kernel: Key type fscrypt-provisioning registered
Jan 26 09:00:48 localhost kernel: Key type big_key registered
Jan 26 09:00:48 localhost kernel: Key type encrypted registered
Jan 26 09:00:48 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 26 09:00:48 localhost kernel: Loading compiled-in module X.509 certificates
Jan 26 09:00:48 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 26 09:00:48 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 26 09:00:48 localhost kernel: ima: No architecture policies found
Jan 26 09:00:48 localhost kernel: evm: Initialising EVM extended attributes:
Jan 26 09:00:48 localhost kernel: evm: security.selinux
Jan 26 09:00:48 localhost kernel: evm: security.SMACK64 (disabled)
Jan 26 09:00:48 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 26 09:00:48 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 26 09:00:48 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 26 09:00:48 localhost kernel: evm: security.apparmor (disabled)
Jan 26 09:00:48 localhost kernel: evm: security.ima
Jan 26 09:00:48 localhost kernel: evm: security.capability
Jan 26 09:00:48 localhost kernel: evm: HMAC attrs: 0x1
Jan 26 09:00:48 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 26 09:00:48 localhost kernel: Running certificate verification RSA selftest
Jan 26 09:00:48 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 26 09:00:48 localhost kernel: Running certificate verification ECDSA selftest
Jan 26 09:00:48 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 26 09:00:48 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 26 09:00:48 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 26 09:00:48 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 26 09:00:48 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 26 09:00:48 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 26 09:00:48 localhost kernel: clk: Disabling unused clocks
Jan 26 09:00:48 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 26 09:00:48 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 26 09:00:48 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 26 09:00:48 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 26 09:00:48 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 26 09:00:48 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 26 09:00:48 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 26 09:00:48 localhost kernel: Run /init as init process
Jan 26 09:00:48 localhost kernel:   with arguments:
Jan 26 09:00:48 localhost kernel:     /init
Jan 26 09:00:48 localhost kernel:   with environment:
Jan 26 09:00:48 localhost kernel:     HOME=/
Jan 26 09:00:48 localhost kernel:     TERM=linux
Jan 26 09:00:48 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 26 09:00:48 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 26 09:00:48 localhost systemd[1]: Detected virtualization kvm.
Jan 26 09:00:48 localhost systemd[1]: Detected architecture x86-64.
Jan 26 09:00:48 localhost systemd[1]: Running in initrd.
Jan 26 09:00:48 localhost systemd[1]: No hostname configured, using default hostname.
Jan 26 09:00:48 localhost systemd[1]: Hostname set to <localhost>.
Jan 26 09:00:48 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 26 09:00:48 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 26 09:00:48 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 26 09:00:48 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 26 09:00:48 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 26 09:00:48 localhost systemd[1]: Reached target Local File Systems.
Jan 26 09:00:48 localhost systemd[1]: Reached target Path Units.
Jan 26 09:00:48 localhost systemd[1]: Reached target Slice Units.
Jan 26 09:00:48 localhost systemd[1]: Reached target Swaps.
Jan 26 09:00:48 localhost systemd[1]: Reached target Timer Units.
Jan 26 09:00:48 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 26 09:00:48 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 26 09:00:48 localhost systemd[1]: Listening on Journal Socket.
Jan 26 09:00:48 localhost systemd[1]: Listening on udev Control Socket.
Jan 26 09:00:48 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 26 09:00:48 localhost systemd[1]: Reached target Socket Units.
Jan 26 09:00:48 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 26 09:00:48 localhost systemd[1]: Starting Journal Service...
Jan 26 09:00:48 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 26 09:00:48 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 26 09:00:48 localhost systemd[1]: Starting Create System Users...
Jan 26 09:00:48 localhost systemd[1]: Starting Setup Virtual Console...
Jan 26 09:00:48 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 26 09:00:48 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 26 09:00:48 localhost systemd[1]: Finished Create System Users.
Jan 26 09:00:48 localhost systemd-journald[305]: Journal started
Jan 26 09:00:48 localhost systemd-journald[305]: Runtime Journal (/run/log/journal/e1437fe8638e4e57ae56ce26d7011781) is 8.0M, max 153.6M, 145.6M free.
Jan 26 09:00:48 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Jan 26 09:00:48 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Jan 26 09:00:48 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 26 09:00:48 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 26 09:00:48 localhost systemd[1]: Started Journal Service.
Jan 26 09:00:48 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 26 09:00:48 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 26 09:00:48 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 26 09:00:48 localhost systemd[1]: Finished Setup Virtual Console.
Jan 26 09:00:48 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 26 09:00:48 localhost systemd[1]: Starting dracut cmdline hook...
Jan 26 09:00:48 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Jan 26 09:00:48 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 26 09:00:48 localhost systemd[1]: Finished dracut cmdline hook.
Jan 26 09:00:48 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 26 09:00:48 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 26 09:00:48 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 26 09:00:48 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 26 09:00:48 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 26 09:00:48 localhost kernel: RPC: Registered udp transport module.
Jan 26 09:00:48 localhost kernel: RPC: Registered tcp transport module.
Jan 26 09:00:48 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 26 09:00:48 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 26 09:00:48 localhost rpc.statd[444]: Version 2.5.4 starting
Jan 26 09:00:48 localhost rpc.statd[444]: Initializing NSM state
Jan 26 09:00:48 localhost rpc.idmapd[449]: Setting log level to 0
Jan 26 09:00:48 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 26 09:00:48 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 26 09:00:48 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Jan 26 09:00:48 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 26 09:00:48 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 26 09:00:49 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 26 09:00:49 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 26 09:00:49 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 26 09:00:49 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 26 09:00:49 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 26 09:00:49 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 26 09:00:49 localhost systemd[1]: Reached target Network.
Jan 26 09:00:49 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 26 09:00:49 localhost systemd[1]: Starting dracut initqueue hook...
Jan 26 09:00:49 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 26 09:00:49 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 26 09:00:49 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 26 09:00:49 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 26 09:00:49 localhost kernel:  vda: vda1
Jan 26 09:00:49 localhost systemd-udevd[503]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 09:00:49 localhost kernel: libata version 3.00 loaded.
Jan 26 09:00:49 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 26 09:00:49 localhost kernel: scsi host0: ata_piix
Jan 26 09:00:49 localhost kernel: scsi host1: ata_piix
Jan 26 09:00:49 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 26 09:00:49 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 26 09:00:49 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 26 09:00:49 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 26 09:00:49 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 26 09:00:49 localhost systemd[1]: Reached target Initrd Root Device.
Jan 26 09:00:49 localhost systemd[1]: Reached target System Initialization.
Jan 26 09:00:49 localhost systemd[1]: Reached target Basic System.
Jan 26 09:00:49 localhost kernel: ata1: found unknown device (class 0)
Jan 26 09:00:49 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 26 09:00:49 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 26 09:00:49 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 26 09:00:49 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 26 09:00:49 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 26 09:00:49 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 26 09:00:49 localhost systemd[1]: Finished dracut initqueue hook.
Jan 26 09:00:49 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 26 09:00:49 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 26 09:00:49 localhost systemd[1]: Reached target Remote File Systems.
Jan 26 09:00:49 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 26 09:00:49 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 26 09:00:49 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 26 09:00:49 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Jan 26 09:00:49 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 26 09:00:49 localhost systemd[1]: Mounting /sysroot...
Jan 26 09:00:50 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 26 09:00:50 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 26 09:00:50 localhost kernel: XFS (vda1): Ending clean mount
Jan 26 09:00:50 localhost systemd[1]: Mounted /sysroot.
Jan 26 09:00:50 localhost systemd[1]: Reached target Initrd Root File System.
Jan 26 09:00:50 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 26 09:00:50 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 26 09:00:50 localhost systemd[1]: Reached target Initrd File Systems.
Jan 26 09:00:50 localhost systemd[1]: Reached target Initrd Default Target.
Jan 26 09:00:50 localhost systemd[1]: Starting dracut mount hook...
Jan 26 09:00:50 localhost systemd[1]: Finished dracut mount hook.
Jan 26 09:00:50 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 26 09:00:50 localhost rpc.idmapd[449]: exiting on signal 15
Jan 26 09:00:50 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 26 09:00:50 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 26 09:00:50 localhost systemd[1]: Stopped target Network.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Timer Units.
Jan 26 09:00:50 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 26 09:00:50 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Basic System.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Path Units.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Remote File Systems.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Slice Units.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Socket Units.
Jan 26 09:00:50 localhost systemd[1]: Stopped target System Initialization.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Local File Systems.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Swaps.
Jan 26 09:00:50 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped dracut mount hook.
Jan 26 09:00:50 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 26 09:00:50 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 26 09:00:50 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 26 09:00:50 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 26 09:00:50 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 26 09:00:50 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 26 09:00:50 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 26 09:00:50 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 26 09:00:50 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 26 09:00:50 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 26 09:00:50 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 26 09:00:50 localhost systemd[1]: systemd-udevd.service: Consumed 1.008s CPU time.
Jan 26 09:00:50 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 26 09:00:50 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Closed udev Control Socket.
Jan 26 09:00:50 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Closed udev Kernel Socket.
Jan 26 09:00:50 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 26 09:00:50 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 26 09:00:50 localhost systemd[1]: Starting Cleanup udev Database...
Jan 26 09:00:50 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 26 09:00:50 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 26 09:00:50 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Stopped Create System Users.
Jan 26 09:00:50 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 26 09:00:50 localhost systemd[1]: Finished Cleanup udev Database.
Jan 26 09:00:50 localhost systemd[1]: Reached target Switch Root.
Jan 26 09:00:50 localhost systemd[1]: Starting Switch Root...
Jan 26 09:00:50 localhost systemd[1]: Switching root.
Jan 26 09:00:50 localhost systemd-journald[305]: Journal stopped
Jan 26 09:00:51 localhost systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Jan 26 09:00:51 localhost kernel: audit: type=1404 audit(1769418050.552:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 26 09:00:51 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 09:00:51 localhost kernel: SELinux:  policy capability open_perms=1
Jan 26 09:00:51 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 09:00:51 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 26 09:00:51 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 09:00:51 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 09:00:51 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 09:00:51 localhost kernel: audit: type=1403 audit(1769418050.685:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 26 09:00:51 localhost systemd[1]: Successfully loaded SELinux policy in 136.612ms.
Jan 26 09:00:51 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.488ms.
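Annotation: the audit records above carry their own epoch timestamps in the `audit(<seconds>.<millis>:<serial>)` header, independent of the syslog wall clock. A minimal sketch (the helper name `audit_time` is ours, not part of auditd) that decodes one and confirms it lands on the same Jan 26 09:00:50 UTC instant the surrounding lines show:

```python
# Hypothetical helper: decode the epoch timestamp embedded in an audit
# record header such as "audit(1769418050.552:2)".
import re
from datetime import datetime, timezone

def audit_time(record: str) -> datetime:
    # Header format: audit(<epoch-seconds>.<millis>:<serial>)
    m = re.search(r"audit\((\d+)\.(\d+):(\d+)\)", record)
    if not m:
        raise ValueError("no audit header found")
    seconds, millis = int(m.group(1)), int(m.group(2))
    return datetime.fromtimestamp(seconds + millis / 1000, tz=timezone.utc)

line = "audit(1769418050.552:2): enforcing=1 old_enforcing=0"
print(audit_time(line))  # 2026-01-26 09:00:50.552000+00:00
```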
Jan 26 09:00:51 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
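Annotation: the long `+FOO`/`-FOO` string above is systemd's compile-time feature list. A small sketch (variable names are ours) that splits it into enabled and disabled sets, which makes it easy to diff builds:

```python
# Minimal sketch: split systemd's build-feature string into enabled (+FOO)
# and disabled (-FOO) feature sets.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "+GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN "
            "-IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT "
            "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "+XKBCOMMON +UTMP +SYSVINIT")
enabled = {t[1:] for t in features.split() if t.startswith("+")}
disabled = {t[1:] for t in features.split() if t.startswith("-")}
print(len(enabled), sorted(disabled))
# 29 ['APPARMOR', 'BPF_FRAMEWORK', 'IDN', 'IPTC', 'PWQUALITY', 'QRENCODE']
```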
Jan 26 09:00:51 localhost systemd[1]: Detected virtualization kvm.
Jan 26 09:00:51 localhost systemd[1]: Detected architecture x86-64.
Jan 26 09:00:51 localhost systemd-rc-local-generator[641]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:00:51 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 26 09:00:51 localhost systemd[1]: Stopped Switch Root.
Jan 26 09:00:51 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 26 09:00:51 localhost systemd[1]: Created slice Slice /system/getty.
Jan 26 09:00:51 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 26 09:00:51 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 26 09:00:51 localhost systemd[1]: Created slice User and Session Slice.
Jan 26 09:00:51 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 26 09:00:51 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 26 09:00:51 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 26 09:00:51 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 26 09:00:51 localhost systemd[1]: Stopped target Switch Root.
Jan 26 09:00:51 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 26 09:00:51 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 26 09:00:51 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 26 09:00:51 localhost systemd[1]: Reached target Path Units.
Jan 26 09:00:51 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 26 09:00:51 localhost systemd[1]: Reached target Slice Units.
Jan 26 09:00:51 localhost systemd[1]: Reached target Swaps.
Jan 26 09:00:51 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 26 09:00:51 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 26 09:00:51 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 26 09:00:51 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 26 09:00:51 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 26 09:00:51 localhost systemd[1]: Listening on udev Control Socket.
Jan 26 09:00:51 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 26 09:00:51 localhost systemd[1]: Mounting Huge Pages File System...
Jan 26 09:00:51 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 26 09:00:51 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 26 09:00:51 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 26 09:00:51 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 26 09:00:51 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 26 09:00:51 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 26 09:00:51 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 26 09:00:51 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 26 09:00:51 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 26 09:00:51 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 26 09:00:51 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 26 09:00:51 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 26 09:00:51 localhost systemd[1]: Stopped Journal Service.
Jan 26 09:00:51 localhost systemd[1]: Starting Journal Service...
Jan 26 09:00:51 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 26 09:00:51 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 26 09:00:51 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 26 09:00:51 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 26 09:00:51 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 26 09:00:51 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 26 09:00:51 localhost kernel: fuse: init (API version 7.37)
Jan 26 09:00:51 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 26 09:00:51 localhost systemd[1]: Mounted Huge Pages File System.
Jan 26 09:00:51 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 26 09:00:51 localhost systemd-journald[682]: Journal started
Jan 26 09:00:51 localhost systemd-journald[682]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 26 09:00:50 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 26 09:00:50 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 26 09:00:51 localhost systemd[1]: Started Journal Service.
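Annotation: the two 09:00:50 lines sandwiched between 09:00:51 entries are most likely messages that were buffered across the switch-root journald restart and replayed once the new journal (PID 682) started, so the apparent time reversal is expected here. The size report uses base-1024 suffixes; a sketch (parser is ours) converting them to bytes:

```python
# Sketch: convert journald's human-readable sizes ("8.0M", "153.6M") from
# the Runtime Journal line above into bytes (journald uses base-1024 units).
import re

UNITS = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}

def to_bytes(size: str) -> int:
    m = re.fullmatch(r"([\d.]+)([KMG])", size)
    return int(float(m.group(1)) * UNITS[m.group(2)])

line = "Runtime Journal (...) is 8.0M, max 153.6M, 145.6M free."
current, maximum, free = re.findall(r"([\d.]+[KMG])", line)
print(to_bytes(current), to_bytes(maximum), to_bytes(free))
# 8388608 161061273 152672665
```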
Jan 26 09:00:51 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 26 09:00:51 localhost kernel: ACPI: bus type drm_connector registered
Jan 26 09:00:51 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 26 09:00:51 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 26 09:00:51 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 26 09:00:51 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 26 09:00:51 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 26 09:00:51 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 26 09:00:51 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 26 09:00:51 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 26 09:00:51 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 26 09:00:51 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 26 09:00:51 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 26 09:00:51 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 26 09:00:51 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 26 09:00:51 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 26 09:00:51 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 26 09:00:51 localhost systemd[1]: Mounting FUSE Control File System...
Jan 26 09:00:51 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 26 09:00:51 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 26 09:00:51 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 26 09:00:51 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 26 09:00:51 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 26 09:00:51 localhost systemd[1]: Starting Create System Users...
Jan 26 09:00:51 localhost systemd-journald[682]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 26 09:00:51 localhost systemd-journald[682]: Received client request to flush runtime journal.
Jan 26 09:00:51 localhost systemd[1]: Mounted FUSE Control File System.
Jan 26 09:00:51 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 26 09:00:51 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 26 09:00:51 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 26 09:00:51 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 26 09:00:51 localhost systemd[1]: Finished Create System Users.
Jan 26 09:00:51 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 26 09:00:51 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 26 09:00:51 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 26 09:00:51 localhost systemd[1]: Reached target Local File Systems.
Jan 26 09:00:51 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 26 09:00:51 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 26 09:00:51 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 26 09:00:51 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 26 09:00:51 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 26 09:00:51 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 26 09:00:51 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 26 09:00:51 localhost bootctl[699]: Couldn't find EFI system partition, skipping.
Jan 26 09:00:51 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 26 09:00:51 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 26 09:00:51 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 26 09:00:51 localhost systemd[1]: Starting Security Auditing Service...
Jan 26 09:00:51 localhost systemd[1]: Starting RPC Bind...
Jan 26 09:00:51 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 26 09:00:51 localhost auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 26 09:00:51 localhost auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 26 09:00:51 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 26 09:00:51 localhost systemd[1]: Started RPC Bind.
Jan 26 09:00:51 localhost augenrules[710]: /sbin/augenrules: No change
Jan 26 09:00:51 localhost augenrules[725]: No rules
Jan 26 09:00:51 localhost augenrules[725]: enabled 1
Jan 26 09:00:51 localhost augenrules[725]: failure 1
Jan 26 09:00:51 localhost augenrules[725]: pid 705
Jan 26 09:00:51 localhost augenrules[725]: rate_limit 0
Jan 26 09:00:51 localhost augenrules[725]: backlog_limit 8192
Jan 26 09:00:51 localhost augenrules[725]: lost 0
Jan 26 09:00:51 localhost augenrules[725]: backlog 0
Jan 26 09:00:51 localhost augenrules[725]: backlog_wait_time 60000
Jan 26 09:00:51 localhost augenrules[725]: backlog_wait_time_actual 0
Jan 26 09:00:51 localhost augenrules[725]: enabled 1
Jan 26 09:00:51 localhost augenrules[725]: failure 1
Jan 26 09:00:51 localhost augenrules[725]: pid 705
Jan 26 09:00:51 localhost augenrules[725]: rate_limit 0
Jan 26 09:00:51 localhost augenrules[725]: backlog_limit 8192
Jan 26 09:00:51 localhost augenrules[725]: lost 0
Jan 26 09:00:51 localhost augenrules[725]: backlog 0
Jan 26 09:00:51 localhost augenrules[725]: backlog_wait_time 60000
Jan 26 09:00:51 localhost augenrules[725]: backlog_wait_time_actual 0
Jan 26 09:00:51 localhost augenrules[725]: enabled 1
Jan 26 09:00:51 localhost augenrules[725]: failure 1
Jan 26 09:00:51 localhost augenrules[725]: pid 705
Jan 26 09:00:51 localhost augenrules[725]: rate_limit 0
Jan 26 09:00:51 localhost augenrules[725]: backlog_limit 8192
Jan 26 09:00:51 localhost augenrules[725]: lost 0
Jan 26 09:00:51 localhost augenrules[725]: backlog 0
Jan 26 09:00:51 localhost augenrules[725]: backlog_wait_time 60000
Jan 26 09:00:51 localhost augenrules[725]: backlog_wait_time_actual 0
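Annotation: the nine-field status block above repeats verbatim three times; augenrules appears to report the kernel audit status once per pass over the rules (we have not confirmed the exact cause, but the payload is identical each time). A sketch (list and dict names are ours) folding one pass into a lookup table:

```python
# Sketch: fold one pass of the augenrules status lines into a dict.
# Field names (enabled, failure, pid, ...) are exactly as logged.
status_lines = """
enabled 1
failure 1
pid 705
rate_limit 0
backlog_limit 8192
lost 0
backlog 0
backlog_wait_time 60000
backlog_wait_time_actual 0
""".splitlines()

status = dict(line.split() for line in status_lines if line.strip())
print(status["backlog_limit"], status["pid"])  # 8192 705
```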
Jan 26 09:00:51 localhost systemd[1]: Started Security Auditing Service.
Jan 26 09:00:51 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 26 09:00:51 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 26 09:00:51 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 26 09:00:51 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 26 09:00:51 localhost systemd[1]: Starting Update is Completed...
Jan 26 09:00:51 localhost systemd[1]: Finished Update is Completed.
Jan 26 09:00:51 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Jan 26 09:00:51 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 26 09:00:51 localhost systemd[1]: Reached target System Initialization.
Jan 26 09:00:51 localhost systemd[1]: Started dnf makecache --timer.
Jan 26 09:00:51 localhost systemd[1]: Started Daily rotation of log files.
Jan 26 09:00:51 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 26 09:00:51 localhost systemd[1]: Reached target Timer Units.
Jan 26 09:00:51 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 26 09:00:51 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 26 09:00:51 localhost systemd[1]: Reached target Socket Units.
Jan 26 09:00:51 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 26 09:00:51 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 26 09:00:51 localhost systemd-udevd[742]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 09:00:51 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 26 09:00:51 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 26 09:00:51 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 26 09:00:51 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 26 09:00:52 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 26 09:00:52 localhost systemd[1]: Reached target Basic System.
Jan 26 09:00:52 localhost dbus-broker-lau[770]: Ready
Jan 26 09:00:52 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 26 09:00:52 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 26 09:00:52 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 26 09:00:52 localhost systemd[1]: Starting NTP client/server...
Jan 26 09:00:52 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 26 09:00:52 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 26 09:00:52 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 26 09:00:52 localhost systemd[1]: Started irqbalance daemon.
Jan 26 09:00:52 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 26 09:00:52 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 09:00:52 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 09:00:52 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 09:00:52 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 26 09:00:52 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 26 09:00:52 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 26 09:00:52 localhost systemd[1]: Starting User Login Management...
Jan 26 09:00:52 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 26 09:00:52 localhost chronyd[797]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 26 09:00:52 localhost chronyd[797]: Loaded 0 symmetric keys
Jan 26 09:00:52 localhost chronyd[797]: Using right/UTC timezone to obtain leap second data
Jan 26 09:00:52 localhost chronyd[797]: Loaded seccomp filter (level 2)
Jan 26 09:00:52 localhost systemd[1]: Started NTP client/server.
Jan 26 09:00:52 localhost systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 26 09:00:52 localhost systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 26 09:00:52 localhost systemd-logind[787]: New seat seat0.
Jan 26 09:00:52 localhost systemd[1]: Started User Login Management.
Jan 26 09:00:52 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 26 09:00:52 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 26 09:00:52 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 26 09:00:52 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 26 09:00:52 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 26 09:00:52 localhost kernel: Console: switching to colour dummy device 80x25
Jan 26 09:00:52 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 26 09:00:52 localhost kernel: [drm] features: -context_init
Jan 26 09:00:52 localhost kernel: [drm] number of scanouts: 1
Jan 26 09:00:52 localhost kernel: [drm] number of cap sets: 0
Jan 26 09:00:52 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 26 09:00:52 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 26 09:00:52 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 26 09:00:52 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 26 09:00:52 localhost kernel: kvm_amd: TSC scaling supported
Jan 26 09:00:52 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 26 09:00:52 localhost kernel: kvm_amd: Nested Paging enabled
Jan 26 09:00:52 localhost kernel: kvm_amd: LBR virtualization supported
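Annotation: the kvm_amd lines above say nested virtualization is enabled in this guest. A minimal sketch, assuming an AMD host with the module loaded, that cross-checks the claim against the module parameter exposed in sysfs:

```python
# Sketch: confirm from sysfs that kvm_amd has nested virtualization on,
# matching the "kvm_amd: Nested Virtualization enabled" line above.
from pathlib import Path

param = Path("/sys/module/kvm_amd/parameters/nested")
if param.exists():
    # Reads as "1"/"0" (or "Y"/"N" on some kernel versions).
    print("nested =", param.read_text().strip())
else:
    print("kvm_amd not loaded on this host")
```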
Jan 26 09:00:52 localhost iptables.init[782]: iptables: Applying firewall rules: [  OK  ]
Jan 26 09:00:52 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 26 09:00:52 localhost cloud-init[841]: Cloud-init v. 24.4-8.el9 running 'init-local' at Mon, 26 Jan 2026 09:00:52 +0000. Up 6.22 seconds.
Jan 26 09:00:52 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 26 09:00:52 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 26 09:00:52 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpgm4nqoba.mount: Deactivated successfully.
Jan 26 09:00:53 localhost systemd[1]: Starting Hostname Service...
Jan 26 09:00:53 localhost systemd[1]: Started Hostname Service.
Jan 26 09:00:53 np0005595444.novalocal systemd-hostnamed[855]: Hostname set to <np0005595444.novalocal> (static)
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Reached target Preparation for Network.
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Starting Network Manager...
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.2775] NetworkManager (version 1.54.3-2.el9) is starting... (boot:86f8f4d3-c158-4ddc-89d7-e9942bcd416d)
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.2780] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.2865] manager[0x55dc95014000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.2905] hostname: hostname: using hostnamed
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.2906] hostname: static hostname changed from (none) to "np0005595444.novalocal"
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.2910] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3010] manager[0x55dc95014000]: rfkill: Wi-Fi hardware radio set enabled
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3011] manager[0x55dc95014000]: rfkill: WWAN hardware radio set enabled
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3121] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3123] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3124] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3125] manager: Networking is enabled by state file
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3130] settings: Loaded settings plugin: keyfile (internal)
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3147] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3181] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3200] dhcp: init: Using DHCP client 'internal'
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3204] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3226] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3238] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3250] device (lo): Activation: starting connection 'lo' (4612cff0-21ca-45d4-990a-e6a88a7d7afa)
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3263] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3270] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3302] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3318] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3322] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3324] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3326] device (eth0): carrier: link connected
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3329] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Started Network Manager.
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3335] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3347] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Reached target Network.
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3350] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3352] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3354] manager: NetworkManager state is now CONNECTING
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3356] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3363] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3366] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3494] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3497] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.3502] device (lo): Activation: successful, device activated.
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Reached target NFS client services.
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Reached target Remote File Systems.
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.6173] dhcp4 (eth0): state changed new lease, address=38.102.83.230
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.6191] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.6221] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.6246] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.6248] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.6257] manager: NetworkManager state is now CONNECTED_SITE
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.6265] device (eth0): Activation: successful, device activated.
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.6273] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 26 09:00:53 np0005595444.novalocal NetworkManager[860]: <info>  [1769418053.6280] manager: startup complete
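Annotation: eth0 walks the full NetworkManager state machine above (unmanaged → unavailable → disconnected → prepare → config → ip-config → ip-check → secondaries → activated) in about 0.3 s. A sketch (function and regex are ours) that reconstructs that timeline from the exact "state change" wording NetworkManager 1.54 logs:

```python
# Sketch: extract per-device state transitions from NetworkManager log text.
import re

pattern = re.compile(
    r"\[(?P<ts>[\d.]+)\] device \((?P<dev>\w+)\): "
    r"state change: (?P<old>[\w-]+) -> (?P<new>[\w-]+)")

def transitions(log_text: str, device: str = "eth0"):
    for m in pattern.finditer(log_text):
        if m.group("dev") == device:
            yield float(m.group("ts")), m.group("old"), m.group("new")

sample = ("<info>  [1769418053.3363] device (eth0): state change: "
          "config -> ip-config (reason 'none', managed-type: 'full')")
print(list(transitions(sample)))
# [(1769418053.3363, 'config', 'ip-config')]
```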
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 26 09:00:53 np0005595444.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: Cloud-init v. 24.4-8.el9 running 'init' at Mon, 26 Jan 2026 09:00:53 +0000. Up 7.62 seconds.
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: |  eth0  | True |        38.102.83.230         | 255.255.255.0 | global | fa:16:3e:90:59:0d |
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fe90:590d/64 |       .       |  link  | fa:16:3e:90:59:0d |
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 26 09:00:53 np0005595444.novalocal cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 26 09:00:54 np0005595444.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
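Annotation: the ci-info tables above are fixed-width ASCII; note row 2 of the IPv4 table is the OpenStack metadata host route (169.254.169.254 via 38.102.83.126). A sketch (parser is ours, not cloud-init's) pulling the route rows back out:

```python
# Sketch: parse rows of cloud-init's ASCII "Route IPv4 info" table.
# Column order follows the table header logged above.
rows = [
    "ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |",
    "ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |",
]

for row in rows:
    cells = [c.strip() for c in row.split("|")[1:-1]]
    route, dest, gw, mask, iface, flags = cells
    print(dest, "via", gw, "dev", iface, flags)
# 0.0.0.0 via 38.102.83.1 dev eth0 UG
# 169.254.169.254 via 38.102.83.126 dev eth0 UGH
```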
Jan 26 09:00:54 np0005595444.novalocal useradd[990]: new group: name=cloud-user, GID=1001
Jan 26 09:00:54 np0005595444.novalocal useradd[990]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 26 09:00:54 np0005595444.novalocal useradd[990]: add 'cloud-user' to group 'adm'
Jan 26 09:00:54 np0005595444.novalocal useradd[990]: add 'cloud-user' to group 'systemd-journal'
Jan 26 09:00:54 np0005595444.novalocal useradd[990]: add 'cloud-user' to shadow group 'adm'
Jan 26 09:00:54 np0005595444.novalocal useradd[990]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: Generating public/private rsa key pair.
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: The key fingerprint is:
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: SHA256:n3cHr4qPhx5tKlrL2W7Bboj/GA6BAaV3o5Uo0kUDYKo root@np0005595444.novalocal
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: The key's randomart image is:
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: +---[RSA 3072]----+
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |.oo+=            |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |o. + o .         |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |o + + =          |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |.. o * .         |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |E   o . S.    .  |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |       . .oo   o |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |      ..oo+o+ . o|
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |      .=o*=B.. o |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |      .oBBOoo..  |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: +----[SHA256]-----+
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: Generating public/private ecdsa key pair.
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: The key fingerprint is:
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: SHA256:Pes7dl2rLNRGrxjX+Dh33PoVMDqvSvfJlAj3I46DGPE root@np0005595444.novalocal
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: The key's randomart image is:
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: +---[ECDSA 256]---+
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |                 |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |                 |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |             o   |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |     .   .  ..o  |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |      o S.o+o +. |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |     . E  o===.oo|
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |      o ..o+=B+o+|
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |     . ..o*oO+++=|
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |         +=*.*=oo|
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: +----[SHA256]-----+
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: Generating public/private ed25519 key pair.
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: The key fingerprint is:
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: SHA256:O3dwqmOer3OhWRFFxA5p25Fiy68rXkR8Qj6wOCCxTYA root@np0005595444.novalocal
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: The key's randomart image is:
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: +--[ED25519 256]--+
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: | .+oo   . .*+.   |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |E  = . . *B +    |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |  . . o .+*O..   |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |       . .=+o    |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |        S oo.    |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |         oo+.    |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |        o++o.    |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |        B*o.     |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: |       +*Bo.     |
Jan 26 09:00:55 np0005595444.novalocal cloud-init[924]: +----[SHA256]-----+
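Annotation: the `SHA256:...` fingerprints printed above are OpenSSH's standard format: the unpadded base64 encoding of the SHA-256 digest of the raw key blob from the `.pub` file. A sketch reproducing one (the helper name `fingerprint` is ours):

```python
# Sketch: recompute an OpenSSH SHA256 host key fingerprint from a .pub file.
import base64
import hashlib

def fingerprint(pubkey_line: str) -> str:
    # A .pub line looks like: "ssh-ed25519 AAAA... comment"
    blob_b64 = pubkey_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

with open("/etc/ssh/ssh_host_ed25519_key.pub") as f:
    print(fingerprint(f.read()))
# should match the ED25519 line above, e.g. SHA256:O3dwqmOer3OhWRFF...
```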
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Reached target Network is Online.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Starting System Logging Service...
Jan 26 09:00:55 np0005595444.novalocal sm-notify[1006]: Version 2.5.4 starting
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Starting Permit User Sessions...
Jan 26 09:00:55 np0005595444.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Jan 26 09:00:55 np0005595444.novalocal sshd[1008]: Server listening on :: port 22.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Finished Permit User Sessions.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Started Command Scheduler.
Jan 26 09:00:55 np0005595444.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Jan 26 09:00:55 np0005595444.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Started Getty on tty1.
Jan 26 09:00:55 np0005595444.novalocal crond[1011]: (CRON) STARTUP (1.5.7)
Jan 26 09:00:55 np0005595444.novalocal crond[1011]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 26 09:00:55 np0005595444.novalocal crond[1011]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 44% if used.)
Jan 26 09:00:55 np0005595444.novalocal crond[1011]: (CRON) INFO (running with inotify support)
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Reached target Login Prompts.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Started System Logging Service.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Reached target Multi-User System.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 26 09:00:55 np0005595444.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 09:00:55 np0005595444.novalocal kdumpctl[1020]: kdump: No kdump initial ramdisk found.
Jan 26 09:00:55 np0005595444.novalocal kdumpctl[1020]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 26 09:00:55 np0005595444.novalocal cloud-init[1112]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Mon, 26 Jan 2026 09:00:55 +0000. Up 9.58 seconds.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 26 09:00:55 np0005595444.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 26 09:00:56 np0005595444.novalocal dracut[1270]: dracut-057-102.git20250818.el9
Jan 26 09:00:56 np0005595444.novalocal cloud-init[1271]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Mon, 26 Jan 2026 09:00:56 +0000. Up 9.95 seconds.
Jan 26 09:00:56 np0005595444.novalocal cloud-init[1288]: #############################################################
Jan 26 09:00:56 np0005595444.novalocal cloud-init[1289]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 26 09:00:56 np0005595444.novalocal cloud-init[1291]: 256 SHA256:Pes7dl2rLNRGrxjX+Dh33PoVMDqvSvfJlAj3I46DGPE root@np0005595444.novalocal (ECDSA)
Jan 26 09:00:56 np0005595444.novalocal cloud-init[1293]: 256 SHA256:O3dwqmOer3OhWRFFxA5p25Fiy68rXkR8Qj6wOCCxTYA root@np0005595444.novalocal (ED25519)
Jan 26 09:00:56 np0005595444.novalocal cloud-init[1295]: 3072 SHA256:n3cHr4qPhx5tKlrL2W7Bboj/GA6BAaV3o5Uo0kUDYKo root@np0005595444.novalocal (RSA)
Jan 26 09:00:56 np0005595444.novalocal cloud-init[1296]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 26 09:00:56 np0005595444.novalocal cloud-init[1297]: #############################################################
Jan 26 09:00:56 np0005595444.novalocal cloud-init[1271]: Cloud-init v. 24.4-8.el9 finished at Mon, 26 Jan 2026 09:00:56 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.14 seconds
Jan 26 09:00:56 np0005595444.novalocal dracut[1273]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 26 09:00:56 np0005595444.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 26 09:00:56 np0005595444.novalocal systemd[1]: Reached target Cloud-init target.
Jan 26 09:00:56 np0005595444.novalocal sshd-session[1357]: Unable to negotiate with 38.102.83.114 port 38362: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 26 09:00:56 np0005595444.novalocal sshd-session[1360]: Connection reset by 38.102.83.114 port 38364 [preauth]
Jan 26 09:00:56 np0005595444.novalocal sshd-session[1365]: Unable to negotiate with 38.102.83.114 port 38370: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 26 09:00:56 np0005595444.novalocal sshd-session[1370]: Unable to negotiate with 38.102.83.114 port 38372: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 26 09:00:56 np0005595444.novalocal sshd-session[1353]: Connection closed by 38.102.83.114 port 38356 [preauth]
Jan 26 09:00:56 np0005595444.novalocal sshd-session[1380]: Connection reset by 38.102.83.114 port 38390 [preauth]
Jan 26 09:00:56 np0005595444.novalocal sshd-session[1385]: Unable to negotiate with 38.102.83.114 port 38402: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 26 09:00:56 np0005595444.novalocal sshd-session[1390]: Unable to negotiate with 38.102.83.114 port 38406: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 26 09:00:56 np0005595444.novalocal sshd-session[1375]: Connection closed by 38.102.83.114 port 38382 [preauth]
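Annotation: the burst of preauth failures above, all from 38.102.83.114 within one second of sshd starting, looks like an external scanner probing one host-key algorithm per connection; the offers it cycles through (ssh-rsa, ssh-dss, nistp384/nistp521 ECDSA) are largely legacy types a default RHEL 9 sshd will not negotiate. A sketch (regex and helper are ours) tallying what such a scanner asked for:

```python
# Sketch: count the host key algorithms offered in "Unable to negotiate"
# lines, to profile what a scanner is probing for.
import re
from collections import Counter

pattern = re.compile(r"no matching host key type found. Their offer: (\S+)")

def offers(log_text: str) -> Counter:
    counts = Counter()
    for m in pattern.finditer(log_text):
        for algo in m.group(1).split(","):
            counts[algo] += 1
    return counts

sample = ("Unable to negotiate with 38.102.83.114 port 38402: no matching "
          "host key type found. Their offer: "
          "ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]")
print(offers(sample))
# Counter({'ssh-rsa': 1, 'ssh-rsa-cert-v01@openssh.com': 1})
```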
Jan 26 09:00:56 np0005595444.novalocal dracut[1273]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 26 09:00:56 np0005595444.novalocal dracut[1273]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 26 09:00:56 np0005595444.novalocal dracut[1273]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 26 09:00:56 np0005595444.novalocal dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 26 09:00:56 np0005595444.novalocal dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 26 09:00:56 np0005595444.novalocal dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 26 09:00:56 np0005595444.novalocal dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: memstrack is not available
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: memstrack is not available
Jan 26 09:00:57 np0005595444.novalocal dracut[1273]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
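Annotation: the "will not be installed" block above appears twice verbatim; dracut seems to evaluate its module list in more than one pass during this kdump initramfs build, so the skip messages repeat (harmless, but noisy). A sketch (names are ours) collapsing them into one module-to-missing-commands map:

```python
# Sketch: deduplicate dracut's "module X will not be installed" lines into
# a map of module -> missing commands. Regex matches the wording logged above.
import re
from collections import defaultdict

pattern = re.compile(
    r"dracut module '([\w-]+)' will not be installed, "
    r"because command '([^']+)' could not be found!")

def missing(log_text: str) -> dict:
    out = defaultdict(set)
    for module, command in pattern.findall(log_text):
        out[module].add(command)
    return dict(out)

sample = ("dracut module 'iscsi' will not be installed, because command "
          "'iscsiadm' could not be found!")
print(missing(sample))  # {'iscsi': {'iscsiadm'}}
```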
Jan 26 09:00:58 np0005595444.novalocal dracut[1273]: *** Including module: systemd ***
Jan 26 09:00:58 np0005595444.novalocal dracut[1273]: *** Including module: fips ***
Jan 26 09:00:58 np0005595444.novalocal chronyd[797]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Jan 26 09:00:58 np0005595444.novalocal chronyd[797]: System clock TAI offset set to 37 seconds
Jan 26 09:00:58 np0005595444.novalocal dracut[1273]: *** Including module: systemd-initrd ***
Jan 26 09:00:58 np0005595444.novalocal dracut[1273]: *** Including module: i18n ***
Jan 26 09:00:58 np0005595444.novalocal dracut[1273]: *** Including module: drm ***
Jan 26 09:00:59 np0005595444.novalocal dracut[1273]: *** Including module: prefixdevname ***
Jan 26 09:00:59 np0005595444.novalocal dracut[1273]: *** Including module: kernel-modules ***
Jan 26 09:00:59 np0005595444.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 26 09:01:00 np0005595444.novalocal dracut[1273]: *** Including module: kernel-modules-extra ***
Jan 26 09:01:00 np0005595444.novalocal dracut[1273]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 26 09:01:00 np0005595444.novalocal dracut[1273]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 26 09:01:00 np0005595444.novalocal dracut[1273]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 26 09:01:00 np0005595444.novalocal dracut[1273]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 26 09:01:00 np0005595444.novalocal dracut[1273]: *** Including module: qemu ***
Jan 26 09:01:00 np0005595444.novalocal dracut[1273]: *** Including module: fstab-sys ***
Jan 26 09:01:00 np0005595444.novalocal dracut[1273]: *** Including module: rootfs-block ***
Jan 26 09:01:00 np0005595444.novalocal dracut[1273]: *** Including module: terminfo ***
Jan 26 09:01:00 np0005595444.novalocal dracut[1273]: *** Including module: udev-rules ***
Jan 26 09:01:01 np0005595444.novalocal dracut[1273]: Skipping udev rule: 91-permissions.rules
Jan 26 09:01:01 np0005595444.novalocal dracut[1273]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 26 09:01:01 np0005595444.novalocal dracut[1273]: *** Including module: virtiofs ***
Jan 26 09:01:01 np0005595444.novalocal dracut[1273]: *** Including module: dracut-systemd ***
Jan 26 09:01:01 np0005595444.novalocal dracut[1273]: *** Including module: usrmount ***
Jan 26 09:01:01 np0005595444.novalocal dracut[1273]: *** Including module: base ***
Jan 26 09:01:01 np0005595444.novalocal dracut[1273]: *** Including module: fs-lib ***
Jan 26 09:01:01 np0005595444.novalocal dracut[1273]: *** Including module: kdumpbase ***
Jan 26 09:01:01 np0005595444.novalocal CROND[2702]: (root) CMD (run-parts /etc/cron.hourly)
Jan 26 09:01:01 np0005595444.novalocal run-parts[2711]: (/etc/cron.hourly) starting 0anacron
Jan 26 09:01:01 np0005595444.novalocal anacron[2726]: Anacron started on 2026-01-26
Jan 26 09:01:01 np0005595444.novalocal anacron[2726]: Will run job `cron.daily' in 16 min.
Jan 26 09:01:01 np0005595444.novalocal anacron[2726]: Will run job `cron.weekly' in 36 min.
Jan 26 09:01:01 np0005595444.novalocal anacron[2726]: Will run job `cron.monthly' in 56 min.
Jan 26 09:01:01 np0005595444.novalocal anacron[2726]: Jobs will be executed sequentially
Jan 26 09:01:01 np0005595444.novalocal run-parts[2729]: (/etc/cron.hourly) finished 0anacron
Jan 26 09:01:01 np0005595444.novalocal CROND[2699]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:   microcode_ctl module: mangling fw_dir
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: configuration "intel" is ignored
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 26 09:01:02 np0005595444.novalocal irqbalance[783]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 26 09:01:02 np0005595444.novalocal irqbalance[783]: IRQ 25 affinity is now unmanaged
Jan 26 09:01:02 np0005595444.novalocal irqbalance[783]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 26 09:01:02 np0005595444.novalocal irqbalance[783]: IRQ 31 affinity is now unmanaged
Jan 26 09:01:02 np0005595444.novalocal irqbalance[783]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 26 09:01:02 np0005595444.novalocal irqbalance[783]: IRQ 28 affinity is now unmanaged
Jan 26 09:01:02 np0005595444.novalocal irqbalance[783]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 26 09:01:02 np0005595444.novalocal irqbalance[783]: IRQ 32 affinity is now unmanaged
Jan 26 09:01:02 np0005595444.novalocal irqbalance[783]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 26 09:01:02 np0005595444.novalocal irqbalance[783]: IRQ 30 affinity is now unmanaged
Jan 26 09:01:02 np0005595444.novalocal irqbalance[783]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 26 09:01:02 np0005595444.novalocal irqbalance[783]: IRQ 29 affinity is now unmanaged
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]: *** Including module: openssl ***
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]: *** Including module: shutdown ***
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]: *** Including module: squash ***
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]: *** Including modules done ***
Jan 26 09:01:02 np0005595444.novalocal dracut[1273]: *** Installing kernel module dependencies ***
Jan 26 09:01:03 np0005595444.novalocal dracut[1273]: *** Installing kernel module dependencies done ***
Jan 26 09:01:03 np0005595444.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 09:01:03 np0005595444.novalocal dracut[1273]: *** Resolving executable dependencies ***
Jan 26 09:01:05 np0005595444.novalocal dracut[1273]: *** Resolving executable dependencies done ***
Jan 26 09:01:05 np0005595444.novalocal dracut[1273]: *** Generating early-microcode cpio image ***
Jan 26 09:01:05 np0005595444.novalocal dracut[1273]: *** Store current command line parameters ***
Jan 26 09:01:05 np0005595444.novalocal dracut[1273]: Stored kernel commandline:
Jan 26 09:01:05 np0005595444.novalocal dracut[1273]: No dracut internal kernel commandline stored in the initramfs
Jan 26 09:01:05 np0005595444.novalocal dracut[1273]: *** Install squash loader ***
Jan 26 09:01:06 np0005595444.novalocal dracut[1273]: *** Squashing the files inside the initramfs ***
Jan 26 09:01:07 np0005595444.novalocal dracut[1273]: *** Squashing the files inside the initramfs done ***
Jan 26 09:01:07 np0005595444.novalocal dracut[1273]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 26 09:01:07 np0005595444.novalocal dracut[1273]: *** Hardlinking files ***
Jan 26 09:01:07 np0005595444.novalocal dracut[1273]: Mode:           real
Jan 26 09:01:07 np0005595444.novalocal dracut[1273]: Files:          50
Jan 26 09:01:07 np0005595444.novalocal dracut[1273]: Linked:         0 files
Jan 26 09:01:07 np0005595444.novalocal dracut[1273]: Compared:       0 xattrs
Jan 26 09:01:07 np0005595444.novalocal dracut[1273]: Compared:       0 files
Jan 26 09:01:07 np0005595444.novalocal dracut[1273]: Saved:          0 B
Jan 26 09:01:07 np0005595444.novalocal dracut[1273]: Duration:       0.000500 seconds
Jan 26 09:01:07 np0005595444.novalocal dracut[1273]: *** Hardlinking files done ***
Jan 26 09:01:07 np0005595444.novalocal dracut[1273]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 26 09:01:08 np0005595444.novalocal kdumpctl[1020]: kdump: kexec: loaded kdump kernel
Jan 26 09:01:08 np0005595444.novalocal kdumpctl[1020]: kdump: Starting kdump: [OK]
Jan 26 09:01:08 np0005595444.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 26 09:01:08 np0005595444.novalocal systemd[1]: Startup finished in 1.708s (kernel) + 2.554s (initrd) + 17.946s (userspace) = 22.209s.
Jan 26 09:01:12 np0005595444.novalocal sshd-session[4319]: Accepted publickey for zuul from 38.102.83.114 port 58676 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 26 09:01:12 np0005595444.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 26 09:01:12 np0005595444.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 26 09:01:12 np0005595444.novalocal systemd-logind[787]: New session 1 of user zuul.
Jan 26 09:01:12 np0005595444.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 26 09:01:12 np0005595444.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Queued start job for default target Main User Target.
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Created slice User Application Slice.
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Started Daily Cleanup of User's Temporary Directories.
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Reached target Paths.
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Reached target Timers.
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Starting D-Bus User Message Bus Socket...
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Starting Create User's Volatile Files and Directories...
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Finished Create User's Volatile Files and Directories.
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Listening on D-Bus User Message Bus Socket.
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Reached target Sockets.
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Reached target Basic System.
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Reached target Main User Target.
Jan 26 09:01:12 np0005595444.novalocal systemd[4323]: Startup finished in 134ms.
Jan 26 09:01:12 np0005595444.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 26 09:01:12 np0005595444.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 26 09:01:12 np0005595444.novalocal sshd-session[4319]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:01:13 np0005595444.novalocal python3[4405]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:01:15 np0005595444.novalocal python3[4433]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:01:23 np0005595444.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 26 09:01:23 np0005595444.novalocal python3[4493]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:01:24 np0005595444.novalocal python3[4533]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 26 09:01:26 np0005595444.novalocal python3[4559]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcdeFK+2Uzt/crRIxrw7Ii0Wo86Wha7SQ4BMdscA2exHPGkBWYBIRcQLWh4xXNIJqC/AbzacMgTYrRAOvShuuLUpKyUrfzg1ixfRmJf9fdw2BnSl3RjaKwYMifr2EHSvqhf5bD53uBkC+IdHfTnkuZk6EY16XIhr9eCxuKNHAwKJpnEOyw1gCntHfxFz0wBfy4kv0fT3TjsjCqzDNTzpWx8b5EO9vxnMmoYiZfDcbf2IFeK5LN6O1oAinJsvJV4PpR7ajuvFx5ScMj/FmW42D4VqeCnnHNS5dWt8JHxwY3glRh2xbY1AFfOTDQ7mJSgDV1rY+vTDOxZH3NcovSw7e0hh1Qt3oRYf47AAcmQdH72ljw6N0w34lxQMgBXA4gr6gzREYttTLX3EzRinYa6SypE2Grj5mT9zmv/OvQcULUWVTP443n0NBQIl+NzQqTOwT0s5E1arsVCcgSGTH/tsVlIFM7jffJDMuZorpoMWq/apou6G84JCh7dpggM1MDTCM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:26 np0005595444.novalocal python3[4583]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:27 np0005595444.novalocal python3[4682]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:01:27 np0005595444.novalocal python3[4753]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769418087.115292-251-96901465189581/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=ce1a2ab08f8a45f8bb0154795a55a641_id_rsa follow=False checksum=50f004a61600a842dd3e22b3105eef0d4eef20ff backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:28 np0005595444.novalocal python3[4876]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:01:28 np0005595444.novalocal python3[4947]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769418088.107515-306-260459274935622/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=ce1a2ab08f8a45f8bb0154795a55a641_id_rsa.pub follow=False checksum=c08207c95d113b3d2dc53dab777685d45917b3fb backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:30 np0005595444.novalocal python3[4995]: ansible-ping Invoked with data=pong
Jan 26 09:01:31 np0005595444.novalocal python3[5019]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:01:33 np0005595444.novalocal python3[5077]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 26 09:01:34 np0005595444.novalocal python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:34 np0005595444.novalocal python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:34 np0005595444.novalocal python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:35 np0005595444.novalocal python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:35 np0005595444.novalocal python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:35 np0005595444.novalocal python3[5229]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:37 np0005595444.novalocal sudo[5253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nabbhzhtggbcfljltvvgerwrbdbsxeew ; /usr/bin/python3'
Jan 26 09:01:37 np0005595444.novalocal sudo[5253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:01:37 np0005595444.novalocal python3[5255]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:37 np0005595444.novalocal sudo[5253]: pam_unix(sudo:session): session closed for user root
Jan 26 09:01:38 np0005595444.novalocal sudo[5331]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkiibxhofqrrmpzsaabjkpyaxuneehwt ; /usr/bin/python3'
Jan 26 09:01:38 np0005595444.novalocal sudo[5331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:01:38 np0005595444.novalocal python3[5333]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:01:38 np0005595444.novalocal sudo[5331]: pam_unix(sudo:session): session closed for user root
Jan 26 09:01:38 np0005595444.novalocal sudo[5404]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwwvpzbrsfifrnyjffnnbwpaqiceivey ; /usr/bin/python3'
Jan 26 09:01:38 np0005595444.novalocal sudo[5404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:01:38 np0005595444.novalocal python3[5406]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769418097.9003267-31-133524826634999/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:38 np0005595444.novalocal sudo[5404]: pam_unix(sudo:session): session closed for user root
Jan 26 09:01:39 np0005595444.novalocal python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:39 np0005595444.novalocal python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:40 np0005595444.novalocal python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:40 np0005595444.novalocal python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:40 np0005595444.novalocal python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:40 np0005595444.novalocal python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:41 np0005595444.novalocal python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:41 np0005595444.novalocal python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:41 np0005595444.novalocal python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:42 np0005595444.novalocal python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:42 np0005595444.novalocal python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:42 np0005595444.novalocal python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:42 np0005595444.novalocal python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:43 np0005595444.novalocal python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:43 np0005595444.novalocal python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:43 np0005595444.novalocal python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:44 np0005595444.novalocal python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:44 np0005595444.novalocal python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:44 np0005595444.novalocal python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:44 np0005595444.novalocal python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:45 np0005595444.novalocal python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:45 np0005595444.novalocal python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:45 np0005595444.novalocal python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:45 np0005595444.novalocal python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:46 np0005595444.novalocal python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:46 np0005595444.novalocal python3[6054]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:01:49 np0005595444.novalocal sudo[6078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brwhlyddprjtzjqprebxqedzguirubuy ; /usr/bin/python3'
Jan 26 09:01:49 np0005595444.novalocal sudo[6078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:01:49 np0005595444.novalocal python3[6080]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 26 09:01:49 np0005595444.novalocal systemd[1]: Starting Time & Date Service...
Jan 26 09:01:49 np0005595444.novalocal systemd[1]: Started Time & Date Service.
Jan 26 09:01:49 np0005595444.novalocal systemd-timedated[6082]: Changed time zone to 'UTC' (UTC).
Jan 26 09:01:49 np0005595444.novalocal sudo[6078]: pam_unix(sudo:session): session closed for user root
Jan 26 09:01:49 np0005595444.novalocal sudo[6109]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eegpunimtexwemfjgpgunwzmmgjsuthg ; /usr/bin/python3'
Jan 26 09:01:49 np0005595444.novalocal sudo[6109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:01:50 np0005595444.novalocal python3[6111]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:50 np0005595444.novalocal sudo[6109]: pam_unix(sudo:session): session closed for user root
Jan 26 09:01:50 np0005595444.novalocal python3[6187]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:01:50 np0005595444.novalocal python3[6258]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769418110.2546372-251-43809947582245/source _original_basename=tmpj54uvwb4 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:51 np0005595444.novalocal python3[6358]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:01:51 np0005595444.novalocal python3[6429]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769418111.1105926-301-78788538110872/source _original_basename=tmp8vi_rvl3 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:52 np0005595444.novalocal sudo[6529]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtxbmwmnybzmdqgddqjxyfcoapuakgch ; /usr/bin/python3'
Jan 26 09:01:52 np0005595444.novalocal sudo[6529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:01:52 np0005595444.novalocal python3[6531]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:01:52 np0005595444.novalocal sudo[6529]: pam_unix(sudo:session): session closed for user root
Jan 26 09:01:52 np0005595444.novalocal sudo[6602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tquxtaivvsbxkyydfiqovelwyqgxpztj ; /usr/bin/python3'
Jan 26 09:01:52 np0005595444.novalocal sudo[6602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:01:53 np0005595444.novalocal python3[6604]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769418112.2437584-381-194865489730471/source _original_basename=tmpt0mnomci follow=False checksum=ec1fff7a2f0c37cc5862f11a9081a375a3f4f428 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:53 np0005595444.novalocal sudo[6602]: pam_unix(sudo:session): session closed for user root
Jan 26 09:01:53 np0005595444.novalocal python3[6652]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:01:54 np0005595444.novalocal python3[6678]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:01:54 np0005595444.novalocal sudo[6756]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yduezwektzgktlsmafcisdstxjhcunkh ; /usr/bin/python3'
Jan 26 09:01:54 np0005595444.novalocal sudo[6756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:01:54 np0005595444.novalocal python3[6758]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:01:54 np0005595444.novalocal sudo[6756]: pam_unix(sudo:session): session closed for user root
Jan 26 09:01:54 np0005595444.novalocal sudo[6829]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqcnolatzrcsqrqkxluldbnnezfbcxlk ; /usr/bin/python3'
Jan 26 09:01:54 np0005595444.novalocal sudo[6829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:01:55 np0005595444.novalocal python3[6831]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769418114.3539712-451-161542955258129/source _original_basename=tmpwo3yn5q3 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:01:55 np0005595444.novalocal sudo[6829]: pam_unix(sudo:session): session closed for user root
Jan 26 09:01:55 np0005595444.novalocal sudo[6880]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enjzbkpdhusfeulsswqrjxwmamlggszh ; /usr/bin/python3'
Jan 26 09:01:55 np0005595444.novalocal sudo[6880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:01:55 np0005595444.novalocal python3[6882]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-0dcc-2282-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:01:55 np0005595444.novalocal sudo[6880]: pam_unix(sudo:session): session closed for user root
Jan 26 09:01:56 np0005595444.novalocal python3[6910]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ef9-e89a-0dcc-2282-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 26 09:01:57 np0005595444.novalocal python3[6938]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:02:18 np0005595444.novalocal sudo[6962]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjdrxqggqgfhushfnusprfvirthvnsiv ; /usr/bin/python3'
Jan 26 09:02:18 np0005595444.novalocal sudo[6962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:02:18 np0005595444.novalocal python3[6964]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:02:18 np0005595444.novalocal sudo[6962]: pam_unix(sudo:session): session closed for user root
Jan 26 09:02:19 np0005595444.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 26 09:03:01 np0005595444.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 26 09:03:01 np0005595444.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 26 09:03:01 np0005595444.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 26 09:03:01 np0005595444.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 26 09:03:01 np0005595444.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 26 09:03:01 np0005595444.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 26 09:03:01 np0005595444.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 26 09:03:01 np0005595444.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 26 09:03:01 np0005595444.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 26 09:03:01 np0005595444.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 26 09:03:01 np0005595444.novalocal NetworkManager[860]: <info>  [1769418181.4186] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 26 09:03:01 np0005595444.novalocal systemd-udevd[6968]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 09:03:01 np0005595444.novalocal NetworkManager[860]: <info>  [1769418181.4338] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:03:01 np0005595444.novalocal NetworkManager[860]: <info>  [1769418181.4360] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 26 09:03:01 np0005595444.novalocal NetworkManager[860]: <info>  [1769418181.4361] device (eth1): carrier: link connected
Jan 26 09:03:01 np0005595444.novalocal NetworkManager[860]: <info>  [1769418181.4363] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 26 09:03:01 np0005595444.novalocal NetworkManager[860]: <info>  [1769418181.4367] policy: auto-activating connection 'Wired connection 1' (569a32bb-5b36-37fc-88bb-a15946fda745)
Jan 26 09:03:01 np0005595444.novalocal NetworkManager[860]: <info>  [1769418181.4369] device (eth1): Activation: starting connection 'Wired connection 1' (569a32bb-5b36-37fc-88bb-a15946fda745)
Jan 26 09:03:01 np0005595444.novalocal NetworkManager[860]: <info>  [1769418181.4370] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:03:01 np0005595444.novalocal NetworkManager[860]: <info>  [1769418181.4372] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:03:01 np0005595444.novalocal NetworkManager[860]: <info>  [1769418181.4374] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:03:01 np0005595444.novalocal NetworkManager[860]: <info>  [1769418181.4377] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 26 09:03:02 np0005595444.novalocal python3[6994]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-b722-d7c5-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:03:12 np0005595444.novalocal sudo[7072]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcruvtkhxytpdhwvdmimfytazuqrnwvo ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 26 09:03:12 np0005595444.novalocal sudo[7072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:03:12 np0005595444.novalocal python3[7074]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:03:12 np0005595444.novalocal sudo[7072]: pam_unix(sudo:session): session closed for user root
Jan 26 09:03:12 np0005595444.novalocal sudo[7145]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hljwngvuyusplvgpnpdjllfwzhhxcpja ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 26 09:03:12 np0005595444.novalocal sudo[7145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:03:12 np0005595444.novalocal python3[7147]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769418192.0053675-104-52432544692766/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=3491b3877d06bc287dbecf187796b167aa784bdc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:03:12 np0005595444.novalocal sudo[7145]: pam_unix(sudo:session): session closed for user root
Jan 26 09:03:13 np0005595444.novalocal sudo[7195]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yinmfkjhlezmcdszipoozfzumtqywxop ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 26 09:03:13 np0005595444.novalocal sudo[7195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:03:13 np0005595444.novalocal python3[7197]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[860]: <info>  [1769418193.5484] caught SIGTERM, shutting down normally.
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[860]: <info>  [1769418193.5493] dhcp4 (eth0): canceled DHCP transaction
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[860]: <info>  [1769418193.5493] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[860]: <info>  [1769418193.5493] dhcp4 (eth0): state changed no lease
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[860]: <info>  [1769418193.5495] manager: NetworkManager state is now CONNECTING
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: Stopping Network Manager...
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[860]: <info>  [1769418193.5572] dhcp4 (eth1): canceled DHCP transaction
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[860]: <info>  [1769418193.5573] dhcp4 (eth1): state changed no lease
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[860]: <info>  [1769418193.5632] exiting (success)
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: Stopped Network Manager.
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: NetworkManager.service: Consumed 1.044s CPU time, 9.9M memory peak.
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: Starting Network Manager...
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.6172] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:86f8f4d3-c158-4ddc-89d7-e9942bcd416d)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.6176] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.6230] manager[0x55efc2e67000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: Starting Hostname Service...
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: Started Hostname Service.
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.6999] hostname: hostname: using hostnamed
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7000] hostname: static hostname changed from (none) to "np0005595444.novalocal"
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7004] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7009] manager[0x55efc2e67000]: rfkill: Wi-Fi hardware radio set enabled
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7009] manager[0x55efc2e67000]: rfkill: WWAN hardware radio set enabled
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7033] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7033] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7033] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7034] manager: Networking is enabled by state file
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7036] settings: Loaded settings plugin: keyfile (internal)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7039] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7059] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7066] dhcp: init: Using DHCP client 'internal'
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7068] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7073] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7077] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7084] device (lo): Activation: starting connection 'lo' (4612cff0-21ca-45d4-990a-e6a88a7d7afa)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7090] device (eth0): carrier: link connected
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7093] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7097] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7098] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7103] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7108] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7113] device (eth1): carrier: link connected
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7116] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7120] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (569a32bb-5b36-37fc-88bb-a15946fda745) (indicated)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7120] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7125] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7130] device (eth1): Activation: starting connection 'Wired connection 1' (569a32bb-5b36-37fc-88bb-a15946fda745)
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: Started Network Manager.
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7135] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7138] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7140] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7141] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7143] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7146] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7149] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7151] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7153] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7161] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7165] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7171] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7174] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7189] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7190] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7195] device (lo): Activation: successful, device activated.
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7208] dhcp4 (eth0): state changed new lease, address=38.102.83.230
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7212] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 26 09:03:13 np0005595444.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 26 09:03:13 np0005595444.novalocal sudo[7195]: pam_unix(sudo:session): session closed for user root
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7419] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7442] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7444] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7447] manager: NetworkManager state is now CONNECTED_SITE
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7457] device (eth0): Activation: successful, device activated.
Jan 26 09:03:13 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418193.7463] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 26 09:03:14 np0005595444.novalocal python3[7281]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-b722-d7c5-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
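[Annotation] The ip route check above runs right after DHCP delivered eth0's lease and the policy engine made 'System eth0' the IPv4 default. An equivalent manual spot-check, if one were debugging this by hand:
    # confirm the default route points out eth0, as the policy log line claims
    ip route show default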
Jan 26 09:03:23 np0005595444.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 09:03:43 np0005595444.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.2881] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 26 09:03:59 np0005595444.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 09:03:59 np0005595444.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3224] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3227] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3234] device (eth1): Activation: successful, device activated.
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3241] manager: startup complete
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3243] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <warn>  [1769418239.3248] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3256] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 26 09:03:59 np0005595444.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3412] dhcp4 (eth1): canceled DHCP transaction
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3414] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3415] dhcp4 (eth1): state changed no lease
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3429] policy: auto-activating connection 'ci-private-network' (16e61b0f-2f70-5c5d-a7c3-11c48ea7bbea)
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3434] device (eth1): Activation: starting connection 'ci-private-network' (16e61b0f-2f70-5c5d-a7c3-11c48ea7bbea)
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3436] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3439] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3446] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3455] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3495] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3501] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:03:59 np0005595444.novalocal NetworkManager[7208]: <info>  [1769418239.3504] device (eth1): Activation: successful, device activated.
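[Annotation] To recap the eth1 story: the assumed 'Wired connection 1' profile timed out waiting for DHCP (45 s, hence 'ip-config-unavailable'), NetworkManager dropped it, and autoconnect then brought up the 'ci-private-network' profile, which activates immediately without a new lease. A hedged way to confirm which profile and addressing a device ended up with:
    # show the active profile plus IPv4 state for eth1
    nmcli -f GENERAL.CONNECTION,IP4 device show eth1
    # list profiles and whether they are allowed to auto-activate
    nmcli -f NAME,UUID,AUTOCONNECT connection show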
Jan 26 09:04:08 np0005595444.novalocal systemd[4323]: Starting Mark boot as successful...
Jan 26 09:04:08 np0005595444.novalocal systemd[4323]: Finished Mark boot as successful.
Jan 26 09:04:09 np0005595444.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 09:04:14 np0005595444.novalocal sshd-session[4332]: Received disconnect from 38.102.83.114 port 58676:11: disconnected by user
Jan 26 09:04:14 np0005595444.novalocal sshd-session[4332]: Disconnected from user zuul 38.102.83.114 port 58676
Jan 26 09:04:14 np0005595444.novalocal sshd-session[4319]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:04:14 np0005595444.novalocal systemd-logind[787]: Session 1 logged out. Waiting for processes to exit.
Jan 26 09:05:17 np0005595444.novalocal sshd-session[7310]: Accepted publickey for zuul from 38.102.83.114 port 52438 ssh2: RSA SHA256:pzGu/8MlhtIDRxsRqlS4AZ6R7CLTQo7Ke10EmY50Qfo
Jan 26 09:05:17 np0005595444.novalocal systemd-logind[787]: New session 3 of user zuul.
Jan 26 09:05:17 np0005595444.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 26 09:05:17 np0005595444.novalocal sshd-session[7310]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:05:17 np0005595444.novalocal sudo[7389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daocxmmtltbmjnvkdwfbzcsernauhmse ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 26 09:05:17 np0005595444.novalocal sudo[7389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:05:17 np0005595444.novalocal python3[7391]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:05:17 np0005595444.novalocal sudo[7389]: pam_unix(sudo:session): session closed for user root
Jan 26 09:05:17 np0005595444.novalocal sudo[7462]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgpgblpkbrgktbulpvcnsmkqfawaygbm ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 26 09:05:17 np0005595444.novalocal sudo[7462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:05:18 np0005595444.novalocal python3[7464]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769418317.4898095-373-217450351274223/source _original_basename=tmpyiieufq5 follow=False checksum=10e957977f1ea6bc363530e08bbb212998c5f9af backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:05:18 np0005595444.novalocal sudo[7462]: pam_unix(sudo:session): session closed for user root
Jan 26 09:05:22 np0005595444.novalocal sshd-session[7313]: Connection closed by 38.102.83.114 port 52438
Jan 26 09:05:22 np0005595444.novalocal sshd-session[7310]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:05:22 np0005595444.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 26 09:05:22 np0005595444.novalocal systemd-logind[787]: Session 3 logged out. Waiting for processes to exit.
Jan 26 09:05:22 np0005595444.novalocal systemd-logind[787]: Removed session 3.
Jan 26 09:07:08 np0005595444.novalocal systemd[4323]: Created slice User Background Tasks Slice.
Jan 26 09:07:08 np0005595444.novalocal systemd[4323]: Starting Cleanup of User's Temporary Files and Directories...
Jan 26 09:07:08 np0005595444.novalocal systemd[4323]: Finished Cleanup of User's Temporary Files and Directories.
Jan 26 09:09:08 np0005595444.novalocal sshd-session[7494]: Connection closed by 80.94.92.171 port 40310
Jan 26 09:11:16 np0005595444.novalocal sshd-session[7495]: Connection closed by 157.245.76.178 port 56376
Jan 26 09:12:21 np0005595444.novalocal sshd-session[7498]: Accepted publickey for zuul from 38.102.83.114 port 44342 ssh2: RSA SHA256:pzGu/8MlhtIDRxsRqlS4AZ6R7CLTQo7Ke10EmY50Qfo
Jan 26 09:12:21 np0005595444.novalocal systemd-logind[787]: New session 4 of user zuul.
Jan 26 09:12:21 np0005595444.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 26 09:12:21 np0005595444.novalocal sshd-session[7498]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:12:21 np0005595444.novalocal sudo[7525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udcuginivjyrogqpnxbxpqowagxlbclr ; /usr/bin/python3'
Jan 26 09:12:21 np0005595444.novalocal sudo[7525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:21 np0005595444.novalocal python3[7527]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-680e-3fc1-00000000217d-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
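[Annotation] The play reads the disk's major:minor pair first because cgroup v2 io.max entries are keyed by device number, not by path. The same lookup by hand, with /dev/vda as in the log:
    # print only the MAJ:MIN column, no header; virtio disks typically
    # report 252:0, matching the value written to io.max below
    lsblk -nd -o MAJ:MIN /dev/vda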
Jan 26 09:12:22 np0005595444.novalocal sudo[7525]: pam_unix(sudo:session): session closed for user root
Jan 26 09:12:22 np0005595444.novalocal sudo[7554]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqobuqwtqcsmvkktkmgfsbzobftptrvv ; /usr/bin/python3'
Jan 26 09:12:22 np0005595444.novalocal sudo[7554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:22 np0005595444.novalocal python3[7556]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:12:22 np0005595444.novalocal sudo[7554]: pam_unix(sudo:session): session closed for user root
Jan 26 09:12:22 np0005595444.novalocal sudo[7580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aynwtmnbnexofsctsmahxqkokrgajgdk ; /usr/bin/python3'
Jan 26 09:12:22 np0005595444.novalocal sudo[7580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:23 np0005595444.novalocal python3[7582]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:12:23 np0005595444.novalocal sudo[7580]: pam_unix(sudo:session): session closed for user root
Jan 26 09:12:23 np0005595444.novalocal sudo[7606]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyekrqqazwpfywlamtsmcrkgayhthsis ; /usr/bin/python3'
Jan 26 09:12:23 np0005595444.novalocal sudo[7606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:23 np0005595444.novalocal python3[7608]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:12:23 np0005595444.novalocal sudo[7606]: pam_unix(sudo:session): session closed for user root
Jan 26 09:12:23 np0005595444.novalocal sudo[7632]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnuxoqmaguoyaovxetreilgrfqvvphdv ; /usr/bin/python3'
Jan 26 09:12:23 np0005595444.novalocal sudo[7632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:23 np0005595444.novalocal python3[7634]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:12:23 np0005595444.novalocal sudo[7632]: pam_unix(sudo:session): session closed for user root
Jan 26 09:12:24 np0005595444.novalocal sudo[7658]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnwohtmtocodudsaygblbmpnbiqgtapk ; /usr/bin/python3'
Jan 26 09:12:24 np0005595444.novalocal sudo[7658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:24 np0005595444.novalocal python3[7660]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:12:24 np0005595444.novalocal sudo[7658]: pam_unix(sudo:session): session closed for user root
Jan 26 09:12:25 np0005595444.novalocal sudo[7736]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eesabavlohwfwkcxxotwlnedmmjmxwlq ; /usr/bin/python3'
Jan 26 09:12:25 np0005595444.novalocal sudo[7736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:25 np0005595444.novalocal python3[7738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:12:25 np0005595444.novalocal sudo[7736]: pam_unix(sudo:session): session closed for user root
Jan 26 09:12:25 np0005595444.novalocal sudo[7809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evkwtojkqcdqxpvzxnbqprqfvrzkehsj ; /usr/bin/python3'
Jan 26 09:12:25 np0005595444.novalocal sudo[7809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:25 np0005595444.novalocal python3[7811]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769418745.0059695-536-141746859900633/source _original_basename=tmp6fol3rfu follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:12:25 np0005595444.novalocal sudo[7809]: pam_unix(sudo:session): session closed for user root
Jan 26 09:12:26 np0005595444.novalocal sudo[7859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbicrucfswjnzblzsmgtlirfigvgvkhc ; /usr/bin/python3'
Jan 26 09:12:26 np0005595444.novalocal sudo[7859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:27 np0005595444.novalocal python3[7861]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 09:12:27 np0005595444.novalocal systemd[1]: Reloading.
Jan 26 09:12:27 np0005595444.novalocal systemd-rc-local-generator[7885]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:12:27 np0005595444.novalocal sudo[7859]: pam_unix(sudo:session): session closed for user root
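[Annotation] The override.conf written just before this reload is not captured in the log (content=NOT_LOGGING_PARAMETER). Given that the play next waits for io.max files to appear under the top-level slices, a plausible, assumed drop-in is one that turns on IO accounting so the io controller is enabled for those cgroups:
    # /etc/systemd/system.conf.d/override.conf -- assumed contents, not in the log
    # [Manager]
    # DefaultIOAccounting=yes
    systemctl daemon-reload   # what the systemd_service task with daemon_reload=True does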
Jan 26 09:12:28 np0005595444.novalocal sudo[7916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijvzwbbgjfjgpxwrofqdqvhlxpmcohdm ; /usr/bin/python3'
Jan 26 09:12:28 np0005595444.novalocal sudo[7916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:28 np0005595444.novalocal python3[7918]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 26 09:12:28 np0005595444.novalocal sudo[7916]: pam_unix(sudo:session): session closed for user root
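[Annotation] The wait_for task above is a plain existence poll (state=present, timeout=30, sleep=1). A shell equivalent:
    # poll up to 30 s for the io controller file to show up on system.slice
    timeout 30 sh -c 'until [ -e /sys/fs/cgroup/system.slice/io.max ]; do sleep 1; done'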
Jan 26 09:12:29 np0005595444.novalocal sudo[7942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxnhkhjdbfsqtipeifnudupjnrsnkruf ; /usr/bin/python3'
Jan 26 09:12:29 np0005595444.novalocal sudo[7942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:29 np0005595444.novalocal python3[7944]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:12:29 np0005595444.novalocal sudo[7942]: pam_unix(sudo:session): session closed for user root
Jan 26 09:12:29 np0005595444.novalocal sudo[7970]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smocnjrhjzeuccxtutdvifdubmdboagn ; /usr/bin/python3'
Jan 26 09:12:29 np0005595444.novalocal sudo[7970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:29 np0005595444.novalocal python3[7972]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:12:29 np0005595444.novalocal sudo[7970]: pam_unix(sudo:session): session closed for user root
Jan 26 09:12:29 np0005595444.novalocal sudo[7998]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naksnhyjqeqkwccavqrzjitsrfrcufqz ; /usr/bin/python3'
Jan 26 09:12:29 np0005595444.novalocal sudo[7998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:30 np0005595444.novalocal python3[8000]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:12:30 np0005595444.novalocal sudo[7998]: pam_unix(sudo:session): session closed for user root
Jan 26 09:12:30 np0005595444.novalocal sudo[8026]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edkgsvwjlogkpuyvbkutgiydtenfqwey ; /usr/bin/python3'
Jan 26 09:12:30 np0005595444.novalocal sudo[8026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:30 np0005595444.novalocal python3[8028]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:12:30 np0005595444.novalocal sudo[8026]: pam_unix(sudo:session): session closed for user root
Jan 26 09:12:31 np0005595444.novalocal python3[8055]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-680e-3fc1-000000002184-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
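[Annotation] Between 09:12:29 and 09:12:31 the play writes the same per-device throttle into each top-level slice, then reads everything back. cgroup v2's io.max takes one device per line in the form "MAJ:MIN key=value ...", so the whole sequence condenses to:
    dev=$(lsblk -nd -o MAJ:MIN /dev/vda)    # "252:0" in this log
    for cg in init.scope machine.slice system.slice user.slice; do
        # cap the slice at 18k read/write IOPS and 250 MiB/s in each direction
        echo "$dev riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > /sys/fs/cgroup/$cg/io.max
        cat /sys/fs/cgroup/$cg/io.max       # verify the limit took
    done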
Jan 26 09:12:31 np0005595444.novalocal python3[8085]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 09:12:34 np0005595444.novalocal sshd-session[7501]: Connection closed by 38.102.83.114 port 44342
Jan 26 09:12:34 np0005595444.novalocal sshd-session[7498]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:12:34 np0005595444.novalocal systemd-logind[787]: Session 4 logged out. Waiting for processes to exit.
Jan 26 09:12:34 np0005595444.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 26 09:12:34 np0005595444.novalocal systemd[1]: session-4.scope: Consumed 4.306s CPU time.
Jan 26 09:12:34 np0005595444.novalocal systemd-logind[787]: Removed session 4.
Jan 26 09:12:36 np0005595444.novalocal sshd-session[8091]: Accepted publickey for zuul from 38.102.83.114 port 53846 ssh2: RSA SHA256:pzGu/8MlhtIDRxsRqlS4AZ6R7CLTQo7Ke10EmY50Qfo
Jan 26 09:12:36 np0005595444.novalocal systemd-logind[787]: New session 5 of user zuul.
Jan 26 09:12:36 np0005595444.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 26 09:12:36 np0005595444.novalocal sshd-session[8091]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:12:36 np0005595444.novalocal sudo[8118]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeuyegwyupqzrbbvewaszebgdeximnpi ; /usr/bin/python3'
Jan 26 09:12:36 np0005595444.novalocal sudo[8118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:12:36 np0005595444.novalocal python3[8120]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
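[Annotation] The dnf task above amounts to a plain package install; the setsebool and SELinux policy messages that follow appear to come from container-selinux post-install scriptlets pulled in as a dependency. Roughly:
    # what the ansible.legacy.dnf task with state=present boils down to
    dnf install -y podman buildah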
Jan 26 09:12:43 np0005595444.novalocal setsebool[8163]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 26 09:12:43 np0005595444.novalocal setsebool[8163]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 26 09:12:55 np0005595444.novalocal kernel: SELinux:  Converting 386 SID table entries...
Jan 26 09:12:55 np0005595444.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 09:12:55 np0005595444.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 26 09:12:55 np0005595444.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 09:12:55 np0005595444.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 26 09:12:55 np0005595444.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 09:12:55 np0005595444.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 09:12:55 np0005595444.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 09:13:06 np0005595444.novalocal kernel: SELinux:  Converting 389 SID table entries...
Jan 26 09:13:06 np0005595444.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 09:13:06 np0005595444.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 26 09:13:06 np0005595444.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 09:13:06 np0005595444.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 26 09:13:06 np0005595444.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 09:13:06 np0005595444.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 09:13:06 np0005595444.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 09:13:24 np0005595444.novalocal dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 26 09:13:24 np0005595444.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 09:13:24 np0005595444.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 26 09:13:24 np0005595444.novalocal systemd[1]: Reloading.
Jan 26 09:13:24 np0005595444.novalocal systemd-rc-local-generator[8936]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:13:24 np0005595444.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 09:13:25 np0005595444.novalocal sudo[8118]: pam_unix(sudo:session): session closed for user root
Jan 26 09:13:32 np0005595444.novalocal irqbalance[783]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 26 09:13:32 np0005595444.novalocal irqbalance[783]: IRQ 27 affinity is now unmanaged
Jan 26 09:13:33 np0005595444.novalocal python3[14619]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-9270-47f7-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:13:34 np0005595444.novalocal kernel: evm: overlay not supported
Jan 26 09:13:34 np0005595444.novalocal systemd[4323]: Starting D-Bus User Message Bus...
Jan 26 09:13:34 np0005595444.novalocal dbus-broker-launch[15041]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 26 09:13:34 np0005595444.novalocal dbus-broker-launch[15041]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 26 09:13:34 np0005595444.novalocal systemd[4323]: Started D-Bus User Message Bus.
Jan 26 09:13:34 np0005595444.novalocal dbus-broker-launch[15041]: Ready
Jan 26 09:13:34 np0005595444.novalocal systemd[4323]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 26 09:13:34 np0005595444.novalocal systemd[4323]: Created slice Slice /user.
Jan 26 09:13:34 np0005595444.novalocal systemd[4323]: podman-14962.scope: unit configures an IP firewall, but not running as root.
Jan 26 09:13:34 np0005595444.novalocal systemd[4323]: (This warning is only shown for the first unit using IP firewalling.)
Jan 26 09:13:34 np0005595444.novalocal systemd[4323]: Started podman-14962.scope.
Jan 26 09:13:34 np0005595444.novalocal systemd[4323]: Started podman-pause-86baa1e3.scope.
Jan 26 09:13:35 np0005595444.novalocal sshd-session[8094]: Connection closed by 38.102.83.114 port 53846
Jan 26 09:13:35 np0005595444.novalocal sshd-session[8091]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:13:35 np0005595444.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 26 09:13:35 np0005595444.novalocal systemd[1]: session-5.scope: Consumed 44.977s CPU time.
Jan 26 09:13:35 np0005595444.novalocal systemd-logind[787]: Session 5 logged out. Waiting for processes to exit.
Jan 26 09:13:35 np0005595444.novalocal systemd-logind[787]: Removed session 5.
Jan 26 09:13:49 np0005595444.novalocal sshd-session[20759]: Connection closed by 38.102.83.222 port 45136 [preauth]
Jan 26 09:13:49 np0005595444.novalocal sshd-session[20757]: Unable to negotiate with 38.102.83.222 port 45174: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 26 09:13:49 np0005595444.novalocal sshd-session[20763]: Connection closed by 38.102.83.222 port 45120 [preauth]
Jan 26 09:13:49 np0005595444.novalocal sshd-session[20761]: Unable to negotiate with 38.102.83.222 port 45142: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 26 09:13:49 np0005595444.novalocal sshd-session[20762]: Unable to negotiate with 38.102.83.222 port 45158: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
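[Annotation] The burst of "no matching host key type found" entries is a client (seemingly the Zuul executor at 38.102.83.222, which logs in normally a few minutes later) probing for ed25519 and FIDO host keys this sshd does not have; it reads as noise, not a failure. If ed25519 host keys were actually wanted, one assumed fix:
    # generate any missing host key types at their default paths, then reload
    ssh-keygen -A
    systemctl restart sshd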
Jan 26 09:13:54 np0005595444.novalocal sshd-session[22403]: Accepted publickey for zuul from 38.102.83.114 port 57414 ssh2: RSA SHA256:pzGu/8MlhtIDRxsRqlS4AZ6R7CLTQo7Ke10EmY50Qfo
Jan 26 09:13:54 np0005595444.novalocal systemd-logind[787]: New session 6 of user zuul.
Jan 26 09:13:55 np0005595444.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 26 09:13:55 np0005595444.novalocal sshd-session[22403]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:13:55 np0005595444.novalocal python3[22502]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAtPEUMhsrPMrPBtrW9DN2hphJag2++Oa2+RW/bLZtOuHDUx3O2VLMTPtjIlKZnSfgaaLhnryF0u4SgeMfgHgP8= zuul@np0005595443.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:13:55 np0005595444.novalocal sudo[22647]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgdsrdznysamtgskjpaktmgfdhykpfyo ; /usr/bin/python3'
Jan 26 09:13:55 np0005595444.novalocal sudo[22647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:13:55 np0005595444.novalocal python3[22658]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAtPEUMhsrPMrPBtrW9DN2hphJag2++Oa2+RW/bLZtOuHDUx3O2VLMTPtjIlKZnSfgaaLhnryF0u4SgeMfgHgP8= zuul@np0005595443.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:13:55 np0005595444.novalocal sudo[22647]: pam_unix(sudo:session): session closed for user root
Jan 26 09:13:56 np0005595444.novalocal sudo[23006]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlxibfcbopazcqjsviapjpcpzwjddeez ; /usr/bin/python3'
Jan 26 09:13:56 np0005595444.novalocal sudo[23006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:13:56 np0005595444.novalocal python3[23012]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005595444.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 26 09:13:56 np0005595444.novalocal useradd[23079]: new group: name=cloud-admin, GID=1002
Jan 26 09:13:56 np0005595444.novalocal useradd[23079]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 26 09:13:56 np0005595444.novalocal sudo[23006]: pam_unix(sudo:session): session closed for user root
Jan 26 09:13:56 np0005595444.novalocal sudo[23195]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogalmtikivryqgqifvllxviqrfxmtbeg ; /usr/bin/python3'
Jan 26 09:13:56 np0005595444.novalocal sudo[23195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:13:57 np0005595444.novalocal python3[23203]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAtPEUMhsrPMrPBtrW9DN2hphJag2++Oa2+RW/bLZtOuHDUx3O2VLMTPtjIlKZnSfgaaLhnryF0u4SgeMfgHgP8= zuul@np0005595443.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 09:13:57 np0005595444.novalocal sudo[23195]: pam_unix(sudo:session): session closed for user root
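[Annotation] The three authorized_key tasks push the same controller ECDSA key to zuul, root, and the new cloud-admin user. For cloud-admin that module call is roughly equivalent to:
    # illustrative shell version of ansible.posix.authorized_key (key elided)
    install -d -m 700 -o cloud-admin -g cloud-admin ~cloud-admin/.ssh
    echo 'ecdsa-sha2-nistp256 AAAA... zuul@np0005595443.novalocal' \
        >> ~cloud-admin/.ssh/authorized_keys
    chmod 600 ~cloud-admin/.ssh/authorized_keys
    chown cloud-admin:cloud-admin ~cloud-admin/.ssh/authorized_keys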
Jan 26 09:13:57 np0005595444.novalocal sudo[23455]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlkcilzdhcaootozqxzqmwuoxbvecpno ; /usr/bin/python3'
Jan 26 09:13:57 np0005595444.novalocal sudo[23455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:13:57 np0005595444.novalocal python3[23462]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:13:57 np0005595444.novalocal sudo[23455]: pam_unix(sudo:session): session closed for user root
Jan 26 09:13:57 np0005595444.novalocal sudo[23678]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfysqbjifiudgvtbbrszlrvceborjcwa ; /usr/bin/python3'
Jan 26 09:13:57 np0005595444.novalocal sudo[23678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:13:58 np0005595444.novalocal python3[23687]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769418837.3546255-150-8006939023166/source _original_basename=tmpo3amqbst follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:13:58 np0005595444.novalocal sudo[23678]: pam_unix(sudo:session): session closed for user root
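[Annotation] Note the sudoers drop-in is copied with validate=None, so nothing checks its syntax before it lands in /etc/sudoers.d, where a malformed file can break sudo. A safer pattern, assumed rather than taken from this play, validates first:
    # confirm the drop-in parses cleanly
    visudo -cf /etc/sudoers.d/cloud-admin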
Jan 26 09:13:58 np0005595444.novalocal sudo[23996]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opbzxcjptqapbomospvwjxnfbedoizxe ; /usr/bin/python3'
Jan 26 09:13:58 np0005595444.novalocal sudo[23996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:13:58 np0005595444.novalocal python3[24005]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 26 09:13:58 np0005595444.novalocal systemd[1]: Starting Hostname Service...
Jan 26 09:13:59 np0005595444.novalocal systemd[1]: Started Hostname Service.
Jan 26 09:13:59 np0005595444.novalocal systemd-hostnamed[24101]: Changed pretty hostname to 'compute-0'
Jan 26 09:13:59 compute-0 systemd-hostnamed[24101]: Hostname set to <compute-0> (static)
Jan 26 09:13:59 compute-0 NetworkManager[7208]: <info>  [1769418839.0989] hostname: static hostname changed from "np0005595444.novalocal" to "compute-0"
Jan 26 09:13:59 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 09:13:59 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 09:13:59 compute-0 sudo[23996]: pam_unix(sudo:session): session closed for user root
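[Annotation] The hostname task with use=systemd goes through systemd-hostnamed, which is why hostnamed starts on demand, both the pretty and static names flip to compute-0, and NetworkManager picks the change up immediately. The CLI equivalent:
    # set the hostname through hostnamed, as the ansible module does
    hostnamectl set-hostname compute-0
    hostnamectl status    # shows the new static hostname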
Jan 26 09:13:59 compute-0 sshd-session[22450]: Connection closed by 38.102.83.114 port 57414
Jan 26 09:13:59 compute-0 sshd-session[22403]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:13:59 compute-0 systemd-logind[787]: Session 6 logged out. Waiting for processes to exit.
Jan 26 09:13:59 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 26 09:13:59 compute-0 systemd[1]: session-6.scope: Consumed 2.343s CPU time.
Jan 26 09:13:59 compute-0 systemd-logind[787]: Removed session 6.
Jan 26 09:14:09 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 09:14:18 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 09:14:18 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 09:14:18 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 2.958s CPU time.
Jan 26 09:14:18 compute-0 systemd[1]: run-r81b6e6e66f824a848448fb65ba423d1e.service: Deactivated successfully.
Jan 26 09:14:28 compute-0 sshd-session[29923]: Invalid user sol from 80.94.92.171 port 43864
Jan 26 09:14:28 compute-0 sshd-session[29923]: Connection closed by invalid user sol 80.94.92.171 port 43864 [preauth]
Jan 26 09:14:29 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 26 09:16:08 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 26 09:16:08 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 26 09:16:08 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 26 09:16:08 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 26 09:17:01 compute-0 anacron[2726]: Job `cron.daily' started
Jan 26 09:17:01 compute-0 anacron[2726]: Job `cron.daily' terminated
Jan 26 09:17:58 compute-0 sshd-session[29937]: Invalid user ubuntu from 80.94.92.171 port 46864
Jan 26 09:17:58 compute-0 sshd-session[29937]: Connection closed by invalid user ubuntu 80.94.92.171 port 46864 [preauth]
Jan 26 09:18:03 compute-0 sshd-session[29940]: Accepted publickey for zuul from 38.102.83.222 port 44238 ssh2: RSA SHA256:pzGu/8MlhtIDRxsRqlS4AZ6R7CLTQo7Ke10EmY50Qfo
Jan 26 09:18:03 compute-0 systemd-logind[787]: New session 7 of user zuul.
Jan 26 09:18:03 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 26 09:18:03 compute-0 sshd-session[29940]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:18:04 compute-0 python3[30016]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:18:06 compute-0 sudo[30130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djocansesvkizjonksrjrtcmkbhnhwld ; /usr/bin/python3'
Jan 26 09:18:06 compute-0 sudo[30130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:06 compute-0 python3[30132]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:18:06 compute-0 sudo[30130]: pam_unix(sudo:session): session closed for user root
Jan 26 09:18:06 compute-0 sudo[30203]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfydxatkqmjbjskkewokctlpirjdrlnt ; /usr/bin/python3'
Jan 26 09:18:06 compute-0 sudo[30203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:06 compute-0 python3[30205]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769419085.8888054-33986-18317071803006/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:18:06 compute-0 sudo[30203]: pam_unix(sudo:session): session closed for user root
Jan 26 09:18:06 compute-0 sudo[30229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjdmppowgxgyfdpyyqbujtxgvqanrhbn ; /usr/bin/python3'
Jan 26 09:18:06 compute-0 sudo[30229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:06 compute-0 python3[30231]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:18:07 compute-0 sudo[30229]: pam_unix(sudo:session): session closed for user root
Jan 26 09:18:07 compute-0 sudo[30302]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grzwqgqjimfiozpzbkbwbnkplwbfjxyj ; /usr/bin/python3'
Jan 26 09:18:07 compute-0 sudo[30302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:07 compute-0 python3[30304]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769419085.8888054-33986-18317071803006/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:18:07 compute-0 sudo[30302]: pam_unix(sudo:session): session closed for user root
Jan 26 09:18:07 compute-0 sudo[30328]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wawntrupvkvfotyylnpdnmhwroxtwvpm ; /usr/bin/python3'
Jan 26 09:18:07 compute-0 sudo[30328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:07 compute-0 python3[30330]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:18:07 compute-0 sudo[30328]: pam_unix(sudo:session): session closed for user root
Jan 26 09:18:07 compute-0 sudo[30401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwhtahbnribremamvwurxtqoqlgorlvi ; /usr/bin/python3'
Jan 26 09:18:07 compute-0 sudo[30401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:07 compute-0 python3[30403]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769419085.8888054-33986-18317071803006/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:18:07 compute-0 sudo[30401]: pam_unix(sudo:session): session closed for user root
Jan 26 09:18:08 compute-0 sudo[30427]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzaswrlpgdcssotymoybtthetzmyzhcy ; /usr/bin/python3'
Jan 26 09:18:08 compute-0 sudo[30427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:08 compute-0 python3[30429]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:18:08 compute-0 sudo[30427]: pam_unix(sudo:session): session closed for user root
Jan 26 09:18:08 compute-0 sudo[30500]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djfietdfxgpzdybpzygnmondtyjfwtrr ; /usr/bin/python3'
Jan 26 09:18:08 compute-0 sudo[30500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:08 compute-0 python3[30502]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769419085.8888054-33986-18317071803006/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:18:08 compute-0 sudo[30500]: pam_unix(sudo:session): session closed for user root
Jan 26 09:18:08 compute-0 sudo[30526]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyuprdotusfumzqtfowsxmbnvylufwez ; /usr/bin/python3'
Jan 26 09:18:08 compute-0 sudo[30526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:08 compute-0 python3[30528]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:18:08 compute-0 sudo[30526]: pam_unix(sudo:session): session closed for user root
Jan 26 09:18:08 compute-0 sudo[30599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntydfaqhuzmicoexxqidxxacpcssozdt ; /usr/bin/python3'
Jan 26 09:18:08 compute-0 sudo[30599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:09 compute-0 python3[30601]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769419085.8888054-33986-18317071803006/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:18:09 compute-0 sudo[30599]: pam_unix(sudo:session): session closed for user root
Jan 26 09:18:09 compute-0 sudo[30625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbyqjemnxbrdflhamopjffhouugqoqtl ; /usr/bin/python3'
Jan 26 09:18:09 compute-0 sudo[30625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:09 compute-0 python3[30627]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:18:09 compute-0 sudo[30625]: pam_unix(sudo:session): session closed for user root
Jan 26 09:18:09 compute-0 sudo[30698]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icvnucqqfhvblvsryortrkszkrglgvly ; /usr/bin/python3'
Jan 26 09:18:09 compute-0 sudo[30698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:09 compute-0 python3[30700]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769419085.8888054-33986-18317071803006/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:18:09 compute-0 sudo[30698]: pam_unix(sudo:session): session closed for user root
Jan 26 09:18:09 compute-0 sudo[30724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulavlncgrgbtrzcvwsbmryuulisavvub ; /usr/bin/python3'
Jan 26 09:18:09 compute-0 sudo[30724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:09 compute-0 python3[30726]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:18:09 compute-0 sudo[30724]: pam_unix(sudo:session): session closed for user root
Jan 26 09:18:10 compute-0 sudo[30797]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivmsvbdmjovhmhjrhxqmvsaxvgmzgawo ; /usr/bin/python3'
Jan 26 09:18:10 compute-0 sudo[30797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:18:10 compute-0 python3[30799]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769419085.8888054-33986-18317071803006/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:18:10 compute-0 sudo[30797]: pam_unix(sudo:session): session closed for user root
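[Annotation] With the delorean and CentOS repo files (plus the delorean.repo.md5 checksum) in place under /etc/yum.repos.d/ (copied with mode=0755, though 0644 is the conventional mode for .repo files), the natural sanity check is asking dnf what it now sees:
    # list the repositories enabled by the freshly dropped-in files
    dnf repolist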
Jan 26 09:18:13 compute-0 sshd-session[30824]: Connection closed by 192.168.122.11 port 55328 [preauth]
Jan 26 09:18:13 compute-0 sshd-session[30825]: Connection closed by 192.168.122.11 port 55342 [preauth]
Jan 26 09:18:13 compute-0 sshd-session[30826]: Unable to negotiate with 192.168.122.11 port 55344: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 26 09:18:13 compute-0 sshd-session[30828]: Unable to negotiate with 192.168.122.11 port 55350: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 26 09:18:13 compute-0 sshd-session[30827]: Unable to negotiate with 192.168.122.11 port 55360: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 26 09:18:22 compute-0 python3[30857]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:19:12 compute-0 sshd-session[30859]: Connection closed by authenticating user root 157.245.76.178 port 41590 [preauth]
Jan 26 09:19:59 compute-0 sshd-session[30863]: Connection closed by authenticating user root 157.245.76.178 port 38980 [preauth]
Jan 26 09:20:46 compute-0 sshd-session[30865]: Connection closed by authenticating user root 157.245.76.178 port 35462 [preauth]
Jan 26 09:21:32 compute-0 sshd-session[30867]: Connection closed by authenticating user root 157.245.76.178 port 36970 [preauth]
Jan 26 09:21:33 compute-0 sshd-session[30869]: Invalid user sol from 80.94.92.171 port 49890
Jan 26 09:21:34 compute-0 sshd-session[30869]: Connection closed by invalid user sol 80.94.92.171 port 49890 [preauth]
Jan 26 09:22:19 compute-0 sshd-session[30871]: Connection closed by authenticating user root 157.245.76.178 port 34890 [preauth]
Jan 26 09:23:05 compute-0 sshd-session[30874]: Connection closed by authenticating user root 157.245.76.178 port 54374 [preauth]
Jan 26 09:23:22 compute-0 sshd-session[29943]: Received disconnect from 38.102.83.222 port 44238:11: disconnected by user
Jan 26 09:23:22 compute-0 sshd-session[29943]: Disconnected from user zuul 38.102.83.222 port 44238
Jan 26 09:23:22 compute-0 sshd-session[29940]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:23:22 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 26 09:23:22 compute-0 systemd[1]: session-7.scope: Consumed 5.033s CPU time.
Jan 26 09:23:22 compute-0 systemd-logind[787]: Session 7 logged out. Waiting for processes to exit.
Jan 26 09:23:22 compute-0 systemd-logind[787]: Removed session 7.
Jan 26 09:23:50 compute-0 sshd-session[30878]: Connection closed by authenticating user root 157.245.76.178 port 38204 [preauth]
Jan 26 09:24:35 compute-0 sshd-session[30880]: Connection closed by authenticating user root 157.245.76.178 port 52464 [preauth]
Jan 26 09:25:01 compute-0 sshd-session[30882]: Invalid user sol from 80.94.92.171 port 52896
Jan 26 09:25:01 compute-0 sshd-session[30882]: Connection closed by invalid user sol 80.94.92.171 port 52896 [preauth]
Jan 26 09:25:21 compute-0 sshd-session[30884]: Connection closed by authenticating user root 157.245.76.178 port 45212 [preauth]
Jan 26 09:26:06 compute-0 sshd-session[30887]: Connection closed by authenticating user root 157.245.76.178 port 56096 [preauth]
Jan 26 09:26:49 compute-0 sshd-session[30889]: Connection closed by authenticating user root 157.245.76.178 port 46494 [preauth]
Jan 26 09:27:33 compute-0 sshd-session[30893]: Connection closed by authenticating user root 157.245.76.178 port 59510 [preauth]
Jan 26 09:28:17 compute-0 sshd-session[30896]: Connection closed by authenticating user root 157.245.76.178 port 41474 [preauth]
Jan 26 09:29:03 compute-0 sshd-session[30898]: Connection closed by authenticating user root 157.245.76.178 port 43018 [preauth]
Jan 26 09:29:30 compute-0 sshd-session[30900]: Accepted publickey for zuul from 192.168.122.30 port 38592 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:29:30 compute-0 systemd-logind[787]: New session 8 of user zuul.
Jan 26 09:29:30 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 26 09:29:30 compute-0 sshd-session[30900]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:29:32 compute-0 python3.9[31053]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:29:33 compute-0 sudo[31232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwdylvkypdlpvabtnzkhyciitlhzprno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419772.6941986-51-58867857613796/AnsiballZ_command.py'
Jan 26 09:29:33 compute-0 sudo[31232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:29:33 compute-0 python3.9[31234]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:29:43 compute-0 sudo[31232]: pam_unix(sudo:session): session closed for user root
Jan 26 09:29:43 compute-0 sshd-session[30903]: Connection closed by 192.168.122.30 port 38592
Jan 26 09:29:43 compute-0 sshd-session[30900]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:29:43 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 26 09:29:43 compute-0 systemd[1]: session-8.scope: Consumed 7.800s CPU time.
Jan 26 09:29:43 compute-0 systemd-logind[787]: Session 8 logged out. Waiting for processes to exit.
Jan 26 09:29:43 compute-0 systemd-logind[787]: Removed session 8.
Jan 26 09:29:48 compute-0 sshd-session[31291]: Connection closed by authenticating user root 157.245.76.178 port 40590 [preauth]
Jan 26 09:29:59 compute-0 sshd-session[31293]: Accepted publickey for zuul from 192.168.122.30 port 59642 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:29:59 compute-0 systemd-logind[787]: New session 9 of user zuul.
Jan 26 09:29:59 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 26 09:29:59 compute-0 sshd-session[31293]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:30:00 compute-0 python3.9[31446]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 26 09:30:01 compute-0 python3.9[31620]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:30:02 compute-0 sudo[31770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttlmqtrjzeqjdxofriwobphxytzdusch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419801.798684-88-217471327590088/AnsiballZ_command.py'
Jan 26 09:30:02 compute-0 sudo[31770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:30:02 compute-0 python3.9[31772]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:30:02 compute-0 sudo[31770]: pam_unix(sudo:session): session closed for user root
Jan 26 09:30:03 compute-0 sudo[31923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlgsbtckjtgjpmgsfftkhgassppgruyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419802.8041394-124-94351794100517/AnsiballZ_stat.py'
Jan 26 09:30:03 compute-0 sudo[31923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:30:03 compute-0 python3.9[31925]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:30:03 compute-0 sudo[31923]: pam_unix(sudo:session): session closed for user root
Jan 26 09:30:04 compute-0 sudo[32075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yocqsfljqszltsevlevuzvlfwrioxafr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419803.6989553-148-95233136671606/AnsiballZ_file.py'
Jan 26 09:30:04 compute-0 sudo[32075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:30:04 compute-0 python3.9[32077]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:30:04 compute-0 sudo[32075]: pam_unix(sudo:session): session closed for user root
Jan 26 09:30:04 compute-0 sudo[32227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyjtrsjxbpalkvojuxsjbfrfexpcjgce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419804.5384793-172-166212540970277/AnsiballZ_stat.py'
Jan 26 09:30:04 compute-0 sudo[32227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:30:04 compute-0 python3.9[32229]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:30:04 compute-0 sudo[32227]: pam_unix(sudo:session): session closed for user root
Jan 26 09:30:05 compute-0 sudo[32350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efzlcfvfjwfqphhfacisifajbwjanfeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419804.5384793-172-166212540970277/AnsiballZ_copy.py'
Jan 26 09:30:05 compute-0 sudo[32350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:30:05 compute-0 python3.9[32352]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769419804.5384793-172-166212540970277/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:30:05 compute-0 sudo[32350]: pam_unix(sudo:session): session closed for user root
Jan 26 09:30:06 compute-0 sudo[32502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dykwwfzshdzcjszgfkbfchfmhindabcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419805.8077626-217-138089748756884/AnsiballZ_setup.py'
Jan 26 09:30:06 compute-0 sudo[32502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:30:06 compute-0 python3.9[32504]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:30:06 compute-0 sudo[32502]: pam_unix(sudo:session): session closed for user root
Jan 26 09:30:06 compute-0 sudo[32658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntzljolkudxrptzcdukgwocxecpueqqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419806.7041934-241-278610672078576/AnsiballZ_file.py'
Jan 26 09:30:06 compute-0 sudo[32658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:30:07 compute-0 python3.9[32660]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:30:07 compute-0 sudo[32658]: pam_unix(sudo:session): session closed for user root
Jan 26 09:30:07 compute-0 sudo[32810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpkhwsfvlqxweyxsmadadpnxnsenzknn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419807.379832-268-192193834090851/AnsiballZ_file.py'
Jan 26 09:30:07 compute-0 sudo[32810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:30:08 compute-0 python3.9[32812]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:30:08 compute-0 sudo[32810]: pam_unix(sudo:session): session closed for user root
Jan 26 09:30:08 compute-0 python3.9[32962]: ansible-ansible.builtin.service_facts Invoked
Jan 26 09:30:16 compute-0 python3.9[33216]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:30:16 compute-0 python3.9[33366]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:30:18 compute-0 python3.9[33520]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:30:18 compute-0 sudo[33676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmdjcotvmmpsgevlvlkopdsifeojnwxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419818.5684516-412-218395543554070/AnsiballZ_setup.py'
Jan 26 09:30:18 compute-0 sudo[33676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:30:19 compute-0 python3.9[33678]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:30:19 compute-0 sudo[33676]: pam_unix(sudo:session): session closed for user root
Jan 26 09:30:19 compute-0 sudo[33760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjikhvxuubqhbcrqpoknftvwvwnzllub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419818.5684516-412-218395543554070/AnsiballZ_dnf.py'
Jan 26 09:30:19 compute-0 sudo[33760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:30:20 compute-0 python3.9[33762]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:30:37 compute-0 sshd-session[33848]: Connection closed by authenticating user root 157.245.76.178 port 59276 [preauth]
Jan 26 09:31:06 compute-0 systemd[1]: Reloading.
Jan 26 09:31:06 compute-0 systemd-rc-local-generator[33962]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:31:06 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 26 09:31:06 compute-0 systemd[1]: Reloading.
Jan 26 09:31:07 compute-0 systemd-rc-local-generator[34005]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:31:07 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 26 09:31:07 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 26 09:31:07 compute-0 systemd[1]: Reloading.
Jan 26 09:31:07 compute-0 systemd-rc-local-generator[34043]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:31:07 compute-0 systemd[1]: Starting dnf makecache...
Jan 26 09:31:07 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 26 09:31:07 compute-0 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Jan 26 09:31:07 compute-0 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Jan 26 09:31:07 compute-0 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Jan 26 09:31:07 compute-0 dnf[34053]: Failed determining last makecache time.
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-openstack-barbican-42b4c41831408a8e323 166 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 195 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-openstack-cinder-1c00d6490d88e436f26ef 188 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-python-stevedore-c4acc5639fd2329372142 183 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-python-cloudkitty-tests-tempest-2c80f8 173 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-os-refresh-config-9bfc52b5049be2d8de61 192 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 197 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-python-designate-tests-tempest-347fdbc 186 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-openstack-glance-1fd12c29b339f30fe823e 192 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 184 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-openstack-manila-3c01b7181572c95dac462 179 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-python-whitebox-neutron-tests-tempest- 181 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-openstack-octavia-ba397f07a7331190208c 162 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-openstack-watcher-c014f81a8647287f6dcc 170 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-ansible-config_template-5ccaa22121a7ff 183 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 155 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-openstack-swift-dc98a8463506ac520c469a 152 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-python-tempestconf-8515371b7cceebd4282 156 kB/s | 3.0 kB     00:00
Jan 26 09:31:07 compute-0 dnf[34053]: delorean-openstack-heat-ui-013accbfd179753bc3f0 178 kB/s | 3.0 kB     00:00
Jan 26 09:31:08 compute-0 dnf[34053]: CentOS Stream 9 - BaseOS                         28 kB/s | 6.7 kB     00:00
Jan 26 09:31:08 compute-0 dnf[34053]: CentOS Stream 9 - AppStream                      56 kB/s | 6.8 kB     00:00
Jan 26 09:31:08 compute-0 dnf[34053]: CentOS Stream 9 - CRB                            69 kB/s | 6.6 kB     00:00
Jan 26 09:31:08 compute-0 dnf[34053]: CentOS Stream 9 - Extras packages                74 kB/s | 7.3 kB     00:00
Jan 26 09:31:08 compute-0 dnf[34053]: dlrn-antelope-testing                           171 kB/s | 3.0 kB     00:00
Jan 26 09:31:08 compute-0 dnf[34053]: dlrn-antelope-build-deps                        173 kB/s | 3.0 kB     00:00
Jan 26 09:31:08 compute-0 dnf[34053]: centos9-rabbitmq                                141 kB/s | 3.0 kB     00:00
Jan 26 09:31:08 compute-0 dnf[34053]: centos9-storage                                 126 kB/s | 3.0 kB     00:00
Jan 26 09:31:08 compute-0 dnf[34053]: centos9-opstools                                133 kB/s | 3.0 kB     00:00
Jan 26 09:31:08 compute-0 dnf[34053]: NFV SIG OpenvSwitch                             126 kB/s | 3.0 kB     00:00
Jan 26 09:31:08 compute-0 dnf[34053]: repo-setup-centos-appstream                     183 kB/s | 4.4 kB     00:00
Jan 26 09:31:09 compute-0 dnf[34053]: repo-setup-centos-baseos                        148 kB/s | 3.9 kB     00:00
Jan 26 09:31:09 compute-0 dnf[34053]: repo-setup-centos-highavailability              144 kB/s | 3.9 kB     00:00
Jan 26 09:31:09 compute-0 dnf[34053]: repo-setup-centos-powertools                    187 kB/s | 4.3 kB     00:00
Jan 26 09:31:09 compute-0 dnf[34053]: Extra Packages for Enterprise Linux 9 - x86_64  247 kB/s |  31 kB     00:00
Jan 26 09:31:09 compute-0 dnf[34053]: Metadata cache created.
Jan 26 09:31:09 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 26 09:31:09 compute-0 systemd[1]: Finished dnf makecache.
Jan 26 09:31:09 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.682s CPU time.
Jan 26 09:31:25 compute-0 sshd-session[34156]: Connection closed by authenticating user root 157.245.76.178 port 43270 [preauth]
Jan 26 09:32:10 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Jan 26 09:32:10 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 09:32:10 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 09:32:10 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 09:32:10 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 09:32:10 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 09:32:10 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 09:32:10 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 09:32:10 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 26 09:32:11 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 09:32:11 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 09:32:11 compute-0 systemd[1]: Reloading.
Jan 26 09:32:11 compute-0 systemd-rc-local-generator[34423]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:32:11 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 09:32:11 compute-0 sudo[33760]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:12 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 09:32:12 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 09:32:12 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.066s CPU time.
Jan 26 09:32:12 compute-0 systemd[1]: run-r7bde0f7394644e43b319891871f13c9a.service: Deactivated successfully.
Jan 26 09:32:13 compute-0 sshd-session[35209]: Connection closed by authenticating user root 157.245.76.178 port 58666 [preauth]
Jan 26 09:32:14 compute-0 sudo[35336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkcztgnvdobfnshdovkmupcdfqrfaeby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419934.149943-448-169756168284137/AnsiballZ_command.py'
Jan 26 09:32:14 compute-0 sudo[35336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:14 compute-0 python3.9[35338]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:32:15 compute-0 sudo[35336]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:16 compute-0 sudo[35617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmgqvsqjqpsnbrbyglbgmeekeozdeykj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419935.6740959-472-35207782998928/AnsiballZ_selinux.py'
Jan 26 09:32:16 compute-0 sudo[35617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:16 compute-0 python3.9[35619]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 26 09:32:16 compute-0 sudo[35617]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:17 compute-0 sudo[35769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaivdjqjjshavoztaqpcctkrjwjuxzif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419937.0189846-505-113473076432469/AnsiballZ_command.py'
Jan 26 09:32:17 compute-0 sudo[35769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:17 compute-0 python3.9[35771]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 26 09:32:18 compute-0 sudo[35769]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:19 compute-0 sudo[35922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-katihntyvgrnigbnbyjmcifqpelsarfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419939.5970075-529-240265118623351/AnsiballZ_file.py'
Jan 26 09:32:19 compute-0 sudo[35922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:20 compute-0 python3.9[35924]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:32:20 compute-0 sudo[35922]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:22 compute-0 sudo[36074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrfchjhzqtlnqvoehozqxvgxsfxabwhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419942.5010207-553-200938706902865/AnsiballZ_mount.py'
Jan 26 09:32:22 compute-0 sudo[36074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:23 compute-0 python3.9[36076]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 26 09:32:23 compute-0 sudo[36074]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:24 compute-0 sudo[36226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czavmyhmrdsttibwdkbfnrqkpqogvmuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419944.1798213-637-209259104480413/AnsiballZ_file.py'
Jan 26 09:32:24 compute-0 sudo[36226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:27 compute-0 python3.9[36228]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:32:27 compute-0 sudo[36226]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:27 compute-0 sudo[36378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqaomvirqswvvrjeearrfcolbdvbuozi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419947.4677734-661-146115054497030/AnsiballZ_stat.py'
Jan 26 09:32:27 compute-0 sudo[36378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:27 compute-0 python3.9[36380]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:32:27 compute-0 sudo[36378]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:28 compute-0 sudo[36501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kizatbrteghajpvbsbffmlzewqtxnspk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419947.4677734-661-146115054497030/AnsiballZ_copy.py'
Jan 26 09:32:28 compute-0 sudo[36501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:29 compute-0 python3.9[36503]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769419947.4677734-661-146115054497030/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a4f71bf0609e75a0e091c9100076ae4c4a7bed4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:32:29 compute-0 sudo[36501]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:32 compute-0 sudo[36653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unaugvowovmotkqotrbllfujxyungggq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419951.8487778-733-74171071019829/AnsiballZ_stat.py'
Jan 26 09:32:32 compute-0 sudo[36653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:32 compute-0 python3.9[36655]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:32:32 compute-0 sudo[36653]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:32 compute-0 sudo[36805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upstsrjenbqlzrqejtwxrjqrvjlmglsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419952.5776749-757-63518287113280/AnsiballZ_command.py'
Jan 26 09:32:32 compute-0 sudo[36805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:33 compute-0 python3.9[36807]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:32:33 compute-0 sudo[36805]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:34 compute-0 sudo[36958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiiimjiaimtjcfkorgxatartmkdpjzhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419954.4932897-781-4389702842176/AnsiballZ_file.py'
Jan 26 09:32:34 compute-0 sudo[36958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:34 compute-0 python3.9[36960]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:32:34 compute-0 sudo[36958]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:35 compute-0 sudo[37110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gigfbiezyvahrmhcednmmkfksyboarvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419955.4770105-814-21161747729768/AnsiballZ_getent.py'
Jan 26 09:32:35 compute-0 sudo[37110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:36 compute-0 python3.9[37112]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 26 09:32:36 compute-0 sudo[37110]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:36 compute-0 sudo[37263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxtsvojfdmlztzgwvathtbmzwrpqkrge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419956.2941651-838-107716630087202/AnsiballZ_group.py'
Jan 26 09:32:36 compute-0 sudo[37263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:36 compute-0 python3.9[37265]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 09:32:36 compute-0 groupadd[37266]: group added to /etc/group: name=qemu, GID=107
Jan 26 09:32:36 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 09:32:36 compute-0 groupadd[37266]: group added to /etc/gshadow: name=qemu
Jan 26 09:32:36 compute-0 groupadd[37266]: new group: name=qemu, GID=107
Jan 26 09:32:37 compute-0 sudo[37263]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:37 compute-0 sudo[37422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkipgcnidlformbrzyfbnvwilzcnruev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419957.4961972-862-148603588023596/AnsiballZ_user.py'
Jan 26 09:32:37 compute-0 sudo[37422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:38 compute-0 python3.9[37424]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 09:32:38 compute-0 useradd[37426]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 26 09:32:38 compute-0 sudo[37422]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:39 compute-0 sudo[37582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvtahvfgxshkmyutveaerrhrefvrsxhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419958.8141253-886-213984503605214/AnsiballZ_getent.py'
Jan 26 09:32:39 compute-0 sudo[37582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:39 compute-0 python3.9[37584]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 26 09:32:39 compute-0 sudo[37582]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:39 compute-0 sudo[37735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yszbktljmmckdbjwhaxcruqefswwubth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419959.5105956-910-222024973555853/AnsiballZ_group.py'
Jan 26 09:32:39 compute-0 sudo[37735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:40 compute-0 python3.9[37737]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 09:32:40 compute-0 groupadd[37738]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 26 09:32:40 compute-0 groupadd[37738]: group added to /etc/gshadow: name=hugetlbfs
Jan 26 09:32:40 compute-0 groupadd[37738]: new group: name=hugetlbfs, GID=42477
Jan 26 09:32:40 compute-0 sudo[37735]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:40 compute-0 sudo[37893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnxqdajjhonbiukyvkqztjqxiqvufoqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419960.4049077-937-111357934888349/AnsiballZ_file.py'
Jan 26 09:32:40 compute-0 sudo[37893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:40 compute-0 python3.9[37895]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 26 09:32:40 compute-0 sudo[37893]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:41 compute-0 sudo[38045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sviavhbqfmrsqxrsvnzunyiazvveksnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419961.3384526-970-29055458036573/AnsiballZ_dnf.py'
Jan 26 09:32:41 compute-0 sudo[38045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:41 compute-0 python3.9[38047]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:32:43 compute-0 sudo[38045]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:45 compute-0 sudo[38198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xehxjudsrjihezzlazqapvfdqzyqppvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419964.9232323-994-53484400155391/AnsiballZ_file.py'
Jan 26 09:32:45 compute-0 sudo[38198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:45 compute-0 python3.9[38200]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:32:45 compute-0 sudo[38198]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:45 compute-0 sudo[38350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbcllmiwamzgosggfyarzqkjjbijmgba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419965.6529043-1018-165650176021606/AnsiballZ_stat.py'
Jan 26 09:32:45 compute-0 sudo[38350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:46 compute-0 python3.9[38352]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:32:46 compute-0 sudo[38350]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:46 compute-0 sudo[38473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkmuflmysbqmdsukrubzhxpboupqdvkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419965.6529043-1018-165650176021606/AnsiballZ_copy.py'
Jan 26 09:32:46 compute-0 sudo[38473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:46 compute-0 python3.9[38475]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769419965.6529043-1018-165650176021606/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:32:46 compute-0 sudo[38473]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:47 compute-0 sudo[38625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvjwxyaxziowxbgczkvxhswoobmwdvki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419966.8798711-1063-234920465390023/AnsiballZ_systemd.py'
Jan 26 09:32:47 compute-0 sudo[38625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:47 compute-0 python3.9[38627]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:32:47 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 26 09:32:47 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 26 09:32:47 compute-0 kernel: Bridge firewalling registered
Jan 26 09:32:47 compute-0 systemd-modules-load[38631]: Inserted module 'br_netfilter'
Jan 26 09:32:47 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 26 09:32:47 compute-0 sudo[38625]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:48 compute-0 sudo[38785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auicrswbpiclrjdfzsnxqaztrffcucca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419968.126407-1087-14879701717779/AnsiballZ_stat.py'
Jan 26 09:32:48 compute-0 sudo[38785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:48 compute-0 python3.9[38787]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:32:48 compute-0 sudo[38785]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:48 compute-0 sudo[38908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiuzcigbvsmjuelhdydwkhfsnogegydc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419968.126407-1087-14879701717779/AnsiballZ_copy.py'
Jan 26 09:32:48 compute-0 sudo[38908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:49 compute-0 python3.9[38910]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769419968.126407-1087-14879701717779/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:32:49 compute-0 sudo[38908]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:49 compute-0 sudo[39060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxlecyyaghbsyjvqiysllwqxowtsjcue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419969.6061368-1141-168698711390044/AnsiballZ_dnf.py'
Jan 26 09:32:49 compute-0 sudo[39060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:50 compute-0 python3.9[39062]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:32:53 compute-0 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Jan 26 09:32:53 compute-0 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Jan 26 09:32:54 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 09:32:54 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 09:32:54 compute-0 systemd[1]: Reloading.
Jan 26 09:32:54 compute-0 systemd-rc-local-generator[39119]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:32:54 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 09:32:54 compute-0 sudo[39060]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:55 compute-0 python3.9[40500]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:32:56 compute-0 python3.9[41503]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 26 09:32:57 compute-0 python3.9[42292]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:32:57 compute-0 sudo[43143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpkipmlypfmejnjemcfnqppmscnqvxpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419977.4476032-1258-267694673955411/AnsiballZ_command.py'
Jan 26 09:32:57 compute-0 sudo[43143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:57 compute-0 python3.9[43150]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:32:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 09:32:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 09:32:57 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.823s CPU time.
Jan 26 09:32:57 compute-0 systemd[1]: run-r4322ed2b5077453ba9d8c5d56b76135d.service: Deactivated successfully.
Jan 26 09:32:58 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 26 09:32:58 compute-0 systemd[1]: Starting Authorization Manager...
Jan 26 09:32:58 compute-0 polkitd[43452]: Started polkitd version 0.117
Jan 26 09:32:58 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 26 09:32:58 compute-0 polkitd[43452]: Loading rules from directory /etc/polkit-1/rules.d
Jan 26 09:32:58 compute-0 polkitd[43452]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 26 09:32:58 compute-0 polkitd[43452]: Finished loading, compiling and executing 2 rules
Jan 26 09:32:58 compute-0 polkitd[43452]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 26 09:32:58 compute-0 systemd[1]: Started Authorization Manager.
Jan 26 09:32:58 compute-0 sudo[43143]: pam_unix(sudo:session): session closed for user root
Jan 26 09:32:59 compute-0 sudo[43620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxgpdftucrmdchwbbodzdgvvftbymbaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419978.8552358-1285-28718747186130/AnsiballZ_systemd.py'
Jan 26 09:32:59 compute-0 sudo[43620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:32:59 compute-0 python3.9[43622]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:32:59 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 26 09:32:59 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 26 09:32:59 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 26 09:32:59 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 26 09:32:59 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 26 09:32:59 compute-0 sudo[43620]: pam_unix(sudo:session): session closed for user root
Jan 26 09:33:00 compute-0 python3.9[43784]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 26 09:33:01 compute-0 sshd-session[43785]: Connection closed by authenticating user root 157.245.76.178 port 40554 [preauth]
Jan 26 09:33:03 compute-0 sudo[43936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drthdlfuwukhypmzxxcxflmfqhuuqfpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419983.3124082-1456-257637517300310/AnsiballZ_systemd.py'
Jan 26 09:33:03 compute-0 sudo[43936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:03 compute-0 python3.9[43938]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:33:03 compute-0 systemd[1]: Reloading.
Jan 26 09:33:04 compute-0 systemd-rc-local-generator[43962]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:33:04 compute-0 sudo[43936]: pam_unix(sudo:session): session closed for user root
Jan 26 09:33:04 compute-0 sudo[44124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzgbbmyodynidjkdmtzrpmgorvdzeyug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419984.3945515-1456-108141403966758/AnsiballZ_systemd.py'
Jan 26 09:33:04 compute-0 sudo[44124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:04 compute-0 python3.9[44126]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:33:05 compute-0 systemd[1]: Reloading.
Jan 26 09:33:05 compute-0 systemd-rc-local-generator[44154]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:33:05 compute-0 sudo[44124]: pam_unix(sudo:session): session closed for user root
Jan 26 09:33:05 compute-0 sudo[44313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vawfjyznejjbgwwgzenjlavwdldnaqlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419985.5828652-1504-195871570357049/AnsiballZ_command.py'
Jan 26 09:33:05 compute-0 sudo[44313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:06 compute-0 python3.9[44315]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:33:06 compute-0 sudo[44313]: pam_unix(sudo:session): session closed for user root
Jan 26 09:33:06 compute-0 sudo[44466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eejeeknnsqwhzbdamenjfbaalaoxrrqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419986.3616323-1528-193062259994518/AnsiballZ_command.py'
Jan 26 09:33:06 compute-0 sudo[44466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:06 compute-0 python3.9[44468]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:33:06 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 26 09:33:06 compute-0 sudo[44466]: pam_unix(sudo:session): session closed for user root
Jan 26 09:33:07 compute-0 sudo[44619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piprgyfchrfhjlgvtvahfmvhtirmvkzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419987.1018336-1552-175182624825861/AnsiballZ_command.py'
Jan 26 09:33:07 compute-0 sudo[44619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:07 compute-0 python3.9[44621]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:33:08 compute-0 sudo[44619]: pam_unix(sudo:session): session closed for user root
Jan 26 09:33:09 compute-0 sudo[44781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdydottgiwdnyczclxvvzrrrhwpdiynn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419989.2628777-1576-205984719179289/AnsiballZ_command.py'
Jan 26 09:33:09 compute-0 sudo[44781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:09 compute-0 python3.9[44783]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:33:09 compute-0 sudo[44781]: pam_unix(sudo:session): session closed for user root
Jan 26 09:33:10 compute-0 sudo[44934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwptezfiloaynqxdhsdatfkvoejvjkgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419989.944905-1600-400245890914/AnsiballZ_systemd.py'
Jan 26 09:33:10 compute-0 sudo[44934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:10 compute-0 python3.9[44936]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:33:10 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 26 09:33:10 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 26 09:33:10 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 26 09:33:10 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 26 09:33:10 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 26 09:33:10 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 26 09:33:10 compute-0 sudo[44934]: pam_unix(sudo:session): session closed for user root
Jan 26 09:33:11 compute-0 sshd-session[31296]: Connection closed by 192.168.122.30 port 59642
Jan 26 09:33:11 compute-0 sshd-session[31293]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:33:11 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 26 09:33:11 compute-0 systemd[1]: session-9.scope: Consumed 2min 12.035s CPU time.
Jan 26 09:33:11 compute-0 systemd-logind[787]: Session 9 logged out. Waiting for processes to exit.
Jan 26 09:33:11 compute-0 systemd-logind[787]: Removed session 9.
Jan 26 09:33:12 compute-0 irqbalance[783]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 26 09:33:12 compute-0 irqbalance[783]: IRQ 26 affinity is now unmanaged
Jan 26 09:33:16 compute-0 sshd-session[44966]: Accepted publickey for zuul from 192.168.122.30 port 43918 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:33:16 compute-0 systemd-logind[787]: New session 10 of user zuul.
Jan 26 09:33:16 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 26 09:33:16 compute-0 sshd-session[44966]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:33:17 compute-0 python3.9[45119]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:33:18 compute-0 sudo[45273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyreiirhvmkwbzxcyhnwqrcxpfxkbxle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419998.1892047-63-261259931258554/AnsiballZ_getent.py'
Jan 26 09:33:18 compute-0 sudo[45273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:18 compute-0 python3.9[45275]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 26 09:33:18 compute-0 sudo[45273]: pam_unix(sudo:session): session closed for user root
Jan 26 09:33:19 compute-0 sudo[45426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izonaoqyjpuvpqbtyiqxxesqqocncrfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769419999.1635368-87-217760219162858/AnsiballZ_group.py'
Jan 26 09:33:19 compute-0 sudo[45426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:19 compute-0 python3.9[45428]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 09:33:19 compute-0 groupadd[45429]: group added to /etc/group: name=openvswitch, GID=42476
Jan 26 09:33:19 compute-0 groupadd[45429]: group added to /etc/gshadow: name=openvswitch
Jan 26 09:33:19 compute-0 groupadd[45429]: new group: name=openvswitch, GID=42476
Jan 26 09:33:19 compute-0 sudo[45426]: pam_unix(sudo:session): session closed for user root
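getent probes for an existing openvswitch account before the group is created with a pinned GID; groupadd then records the new entries in /etc/group and /etc/gshadow. The same two steps in plain shell:

  getent passwd openvswitch || echo 'not present yet'
  sudo groupadd --gid 42476 openvswitch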
Jan 26 09:33:20 compute-0 sudo[45584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeuwjbsemotepcoutvqxrtzhcdrjevnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420000.105389-111-170850354956678/AnsiballZ_user.py'
Jan 26 09:33:20 compute-0 sudo[45584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:20 compute-0 python3.9[45586]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 09:33:20 compute-0 useradd[45588]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 26 09:33:20 compute-0 useradd[45588]: add 'openvswitch' to group 'hugetlbfs'
Jan 26 09:33:20 compute-0 useradd[45588]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 26 09:33:20 compute-0 sudo[45584]: pam_unix(sudo:session): session closed for user root
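The matching user is created with the same pinned UID/GID, no login shell, and hugetlbfs as its one supplementary group (append=False replaces any existing group list), which is commonly done so the daemon can map shared hugepages. A useradd equivalent:

  sudo useradd --uid 42476 --gid openvswitch --groups hugetlbfs \
      --comment 'openvswitch user' --shell /sbin/nologin openvswitch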
Jan 26 09:33:21 compute-0 sudo[45744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiqmaotocmdwpbmqxdyhsgcempnzmorm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420001.240134-141-192780534541922/AnsiballZ_setup.py'
Jan 26 09:33:21 compute-0 sudo[45744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:21 compute-0 python3.9[45746]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:33:22 compute-0 sudo[45744]: pam_unix(sudo:session): session closed for user root
Jan 26 09:33:22 compute-0 sudo[45828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwxpjrdszohwxyzbyexxbcqktgphqgnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420001.240134-141-192780534541922/AnsiballZ_dnf.py'
Jan 26 09:33:22 compute-0 sudo[45828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:22 compute-0 python3.9[45830]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 09:33:24 compute-0 sudo[45828]: pam_unix(sudo:session): session closed for user root
Jan 26 09:33:25 compute-0 sudo[45992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzjkoeziqzufrqjhsjybeofdvcfpucsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420005.2383568-183-16103418816080/AnsiballZ_dnf.py'
Jan 26 09:33:25 compute-0 sudo[45992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:25 compute-0 python3.9[45994]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:33:36 compute-0 kernel: SELinux:  Converting 2736 SID table entries...
Jan 26 09:33:36 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 09:33:36 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 09:33:36 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 09:33:36 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 09:33:36 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 09:33:36 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 09:33:36 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 09:33:36 compute-0 groupadd[46017]: group added to /etc/group: name=unbound, GID=994
Jan 26 09:33:36 compute-0 groupadd[46017]: group added to /etc/gshadow: name=unbound
Jan 26 09:33:36 compute-0 groupadd[46017]: new group: name=unbound, GID=994
Jan 26 09:33:36 compute-0 useradd[46024]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 26 09:33:36 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 26 09:33:36 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 26 09:33:38 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 09:33:38 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 09:33:38 compute-0 systemd[1]: Reloading.
Jan 26 09:33:38 compute-0 systemd-sysv-generator[46523]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:33:38 compute-0 systemd-rc-local-generator[46520]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:33:38 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 09:33:39 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 09:33:39 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 09:33:39 compute-0 systemd[1]: run-r51b4323f558c4ad2b528e316a5af1a8d.service: Deactivated successfully.
Jan 26 09:33:39 compute-0 sudo[45992]: pam_unix(sudo:session): session closed for user root
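The two dnf tasks split the openvswitch install into a download-only phase and a transaction phase, so mirror trouble surfaces before anything changes on disk. The SELinux SID-table conversion, the unbound user, and the DNSSEC trust-anchor timer in between look like dependency scriptlets firing during the transaction (recent Open vSwitch links against libunbound), and man-db-cache-update is the usual post-install man-page reindex. CLI equivalent of the two phases:

  sudo dnf -y install --downloadonly openvswitch
  sudo dnf -y install openvswitch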
Jan 26 09:33:43 compute-0 sudo[47092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuxhefjfsxisinlfmqxbvzrnndxvjjhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420022.5778244-207-40380088776902/AnsiballZ_systemd.py'
Jan 26 09:33:43 compute-0 sudo[47092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:43 compute-0 python3.9[47094]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 09:33:43 compute-0 systemd[1]: Reloading.
Jan 26 09:33:43 compute-0 systemd-sysv-generator[47128]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:33:43 compute-0 systemd-rc-local-generator[47125]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:33:43 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 26 09:33:43 compute-0 chown[47136]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 26 09:33:43 compute-0 ovs-ctl[47141]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 26 09:33:44 compute-0 ovs-ctl[47141]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 26 09:33:44 compute-0 ovs-ctl[47141]: Starting ovsdb-server [  OK  ]
Jan 26 09:33:44 compute-0 ovs-vsctl[47190]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 26 09:33:44 compute-0 ovs-vsctl[47206]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"f90cdfa2-81a1-408b-861e-9121944637ea\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 26 09:33:44 compute-0 ovs-ctl[47141]: Configuring Open vSwitch system IDs [  OK  ]
Jan 26 09:33:44 compute-0 ovs-vsctl[47216]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 26 09:33:44 compute-0 ovs-ctl[47141]: Enabling remote OVSDB managers [  OK  ]
Jan 26 09:33:44 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 26 09:33:44 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 26 09:33:44 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 26 09:33:44 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 26 09:33:44 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 26 09:33:44 compute-0 ovs-ctl[47261]: Inserting openvswitch module [  OK  ]
Jan 26 09:33:44 compute-0 ovs-ctl[47230]: Starting ovs-vswitchd [  OK  ]
Jan 26 09:33:44 compute-0 ovs-vsctl[47279]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 26 09:33:44 compute-0 ovs-ctl[47230]: Enabling remote OVSDB managers [  OK  ]
Jan 26 09:33:44 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 26 09:33:44 compute-0 systemd[1]: Starting Open vSwitch...
Jan 26 09:33:44 compute-0 systemd[1]: Finished Open vSwitch.
Jan 26 09:33:44 compute-0 sudo[47092]: pam_unix(sudo:session): session closed for user root
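Because this is the service's first start, ovs-ctl bootstraps the whole stack: it creates /etc/openvswitch/conf.db from the schema, starts ovsdb-server, seeds db-version, system-id and ovs-version through ovs-vsctl, inserts the openvswitch kernel module, and starts ovs-vswitchd. The chown complaint about /run/openvswitch is first-boot noise; that directory does not exist until the daemons create it. To reproduce and inspect by hand:

  sudo systemctl enable --now openvswitch.service
  sudo ovs-vsctl show                 # bridge/port layout, empty on first start
  sudo ovs-vsctl list Open_vSwitch .  # db-version, ovs-version, external-ids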
Jan 26 09:33:45 compute-0 python3.9[47430]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:33:46 compute-0 sudo[47580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cswtcmthvudpeaprlzrxlombsnezqhnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420025.8516295-261-206347441935910/AnsiballZ_sefcontext.py'
Jan 26 09:33:46 compute-0 sudo[47580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:46 compute-0 python3.9[47582]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 26 09:33:47 compute-0 kernel: SELinux:  Converting 2750 SID table entries...
Jan 26 09:33:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 09:33:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 09:33:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 09:33:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 09:33:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 09:33:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 09:33:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 09:33:47 compute-0 sudo[47580]: pam_unix(sudo:session): session closed for user root
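sefcontext persists a file-context rule in the local SELinux policy store, hence the second kernel-side policy reload; labelling /var/lib/edpm-config(/.*)? as container_file_t at level s0 is what later lets containers write under that tree. The usual semanage/restorecon pair, sketched here with restorecon deferred until the directory is created a few tasks below:

  sudo semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
  sudo restorecon -Rv /var/lib/edpm-config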
Jan 26 09:33:48 compute-0 sshd-session[47588]: Connection closed by authenticating user root 157.245.76.178 port 39848 [preauth]
Jan 26 09:33:48 compute-0 python3.9[47739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:33:49 compute-0 sudo[47895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mguyfbhxtdxhnqofiakooxvillsdlzrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420029.340693-315-100176995966772/AnsiballZ_dnf.py'
Jan 26 09:33:49 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 26 09:33:49 compute-0 sudo[47895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:49 compute-0 python3.9[47897]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:33:51 compute-0 sudo[47895]: pam_unix(sudo:session): session closed for user root
Jan 26 09:33:51 compute-0 sudo[48048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqitupmoiamfuzkemoddehaowemmqxnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420031.5866609-339-67257069741727/AnsiballZ_command.py'
Jan 26 09:33:51 compute-0 sudo[48048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:52 compute-0 python3.9[48050]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:33:52 compute-0 sudo[48048]: pam_unix(sudo:session): session closed for user root
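Right after installing the host-tooling batch, the play re-verifies it with rpm -V, which prints a line for any file whose size, digest, mode, or owner no longer matches the rpm database and exits non-zero on drift, a cheap post-install integrity check. By hand (verification shown for a subset):

  sudo dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
      openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
      sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
      grubby sos
  rpm -V driverctl lvm2 crudini   # silent and exit 0 when everything matches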
Jan 26 09:33:53 compute-0 sudo[48335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfgwyozbnqbbrqtxpjcclfeyhtrgmqvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420033.1284206-363-210991655723377/AnsiballZ_file.py'
Jan 26 09:33:53 compute-0 sudo[48335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:53 compute-0 python3.9[48337]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 26 09:33:53 compute-0 sudo[48335]: pam_unix(sudo:session): session closed for user root
Jan 26 09:33:54 compute-0 python3.9[48487]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:33:55 compute-0 sudo[48639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obkvtoyphmkgijtfkbqvwjqwlwdxyzos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420034.9108348-411-67529989896921/AnsiballZ_dnf.py'
Jan 26 09:33:55 compute-0 sudo[48639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:33:55 compute-0 python3.9[48641]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:33:57 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 09:33:57 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 09:33:57 compute-0 systemd[1]: Reloading.
Jan 26 09:33:57 compute-0 systemd-rc-local-generator[48679]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:33:57 compute-0 systemd-sysv-generator[48684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:33:57 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 09:33:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 09:33:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 09:33:57 compute-0 systemd[1]: run-r18771b84cb8643419342f36ea5514f87.service: Deactivated successfully.
Jan 26 09:33:58 compute-0 sudo[48639]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:01 compute-0 sudo[48955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aegxzkhpkjxvvdtanzlurznptmycmxqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420040.8436303-435-192575346704061/AnsiballZ_systemd.py'
Jan 26 09:34:01 compute-0 sudo[48955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:01 compute-0 python3.9[48957]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:34:01 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 26 09:34:01 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 26 09:34:01 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 26 09:34:01 compute-0 NetworkManager[7208]: <info>  [1769420041.5035] caught SIGTERM, shutting down normally.
Jan 26 09:34:01 compute-0 systemd[1]: Stopping Network Manager...
Jan 26 09:34:01 compute-0 NetworkManager[7208]: <info>  [1769420041.5047] dhcp4 (eth0): canceled DHCP transaction
Jan 26 09:34:01 compute-0 NetworkManager[7208]: <info>  [1769420041.5048] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 09:34:01 compute-0 NetworkManager[7208]: <info>  [1769420041.5048] dhcp4 (eth0): state changed no lease
Jan 26 09:34:01 compute-0 NetworkManager[7208]: <info>  [1769420041.5050] manager: NetworkManager state is now CONNECTED_SITE
Jan 26 09:34:01 compute-0 NetworkManager[7208]: <info>  [1769420041.5135] exiting (success)
Jan 26 09:34:01 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 09:34:01 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 09:34:01 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 26 09:34:01 compute-0 systemd[1]: Stopped Network Manager.
Jan 26 09:34:01 compute-0 systemd[1]: NetworkManager.service: Consumed 12.223s CPU time, 4.1M memory peak, read 0B from disk, written 21.0K to disk.
Jan 26 09:34:01 compute-0 systemd[1]: Starting Network Manager...
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.5886] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:86f8f4d3-c158-4ddc-89d7-e9942bcd416d)
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.5888] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.5938] manager[0x563285311000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 26 09:34:01 compute-0 systemd[1]: Starting Hostname Service...
Jan 26 09:34:01 compute-0 systemd[1]: Started Hostname Service.
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6756] hostname: hostname: using hostnamed
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6758] hostname: static hostname changed from (none) to "compute-0"
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6761] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6765] manager[0x563285311000]: rfkill: Wi-Fi hardware radio set enabled
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6765] manager[0x563285311000]: rfkill: WWAN hardware radio set enabled
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6783] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6790] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6790] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6791] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6791] manager: Networking is enabled by state file
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6793] settings: Loaded settings plugin: keyfile (internal)
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6795] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6817] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6825] dhcp: init: Using DHCP client 'internal'
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6827] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6831] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6834] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6840] device (lo): Activation: starting connection 'lo' (4612cff0-21ca-45d4-990a-e6a88a7d7afa)
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6845] device (eth0): carrier: link connected
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6849] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6853] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6854] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6858] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6863] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6868] device (eth1): carrier: link connected
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6871] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6875] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (16e61b0f-2f70-5c5d-a7c3-11c48ea7bbea) (indicated)
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6876] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6879] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6885] device (eth1): Activation: starting connection 'ci-private-network' (16e61b0f-2f70-5c5d-a7c3-11c48ea7bbea)
Jan 26 09:34:01 compute-0 systemd[1]: Started Network Manager.
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6890] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6896] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6898] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6899] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6901] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6904] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6906] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6908] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6910] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6915] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6918] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6927] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6936] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6950] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6952] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6956] device (lo): Activation: successful, device activated.
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6960] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6961] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6963] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6965] device (eth1): Activation: successful, device activated.
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6969] dhcp4 (eth0): state changed new lease, address=38.102.83.230
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.6975] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.7043] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.7059] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.7061] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.7064] manager: NetworkManager state is now CONNECTED_SITE
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.7067] device (eth0): Activation: successful, device activated.
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.7072] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 26 09:34:01 compute-0 NetworkManager[48970]: <info>  [1769420041.7075] manager: startup complete
Jan 26 09:34:01 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 26 09:34:01 compute-0 sudo[48955]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:01 compute-0 systemd[1]: Finished Network Manager Wait Online.
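This restart is where the freshly installed NetworkManager-ovs plugin (NMOvsFactory) first loads; eth0 and eth1 are re-assumed rather than torn down, so the DHCP lease on 38.102.83.230 survives the bounce. The same bounce by hand, including the startup wait that NetworkManager-wait-online.service performs:

  sudo systemctl restart NetworkManager
  nm-online -s -q --timeout=60   # succeeds once NetworkManager reports startup complete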
Jan 26 09:34:02 compute-0 sudo[49181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvmqwodjjhqcgrreqptcwhcvgyifqhxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420041.9372213-459-177251154202597/AnsiballZ_dnf.py'
Jan 26 09:34:02 compute-0 sudo[49181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:02 compute-0 python3.9[49183]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:34:06 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 09:34:06 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 09:34:06 compute-0 systemd[1]: Reloading.
Jan 26 09:34:06 compute-0 systemd-sysv-generator[49241]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:34:06 compute-0 systemd-rc-local-generator[49236]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:34:06 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 09:34:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 09:34:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 09:34:07 compute-0 systemd[1]: run-rf5a1426a38fa471fac64223270e70a69.service: Deactivated successfully.
Jan 26 09:34:07 compute-0 sudo[49181]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:11 compute-0 sudo[49643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nicluezlbdrzwyutcjwbyknqzyihfuvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420051.4792619-495-12967372298345/AnsiballZ_stat.py'
Jan 26 09:34:11 compute-0 sudo[49643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:11 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 09:34:12 compute-0 python3.9[49645]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:34:12 compute-0 sudo[49643]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:12 compute-0 sudo[49795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhmgxrizpjpxkbofxvlrxpoweyopefcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420052.288438-522-148307176016682/AnsiballZ_ini_file.py'
Jan 26 09:34:12 compute-0 sudo[49795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:12 compute-0 python3.9[49797]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:34:12 compute-0 sudo[49795]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:13 compute-0 sudo[49949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkqvmbblbydptqaqtnkdnxgjwsctgroo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420053.2599828-552-72404799521500/AnsiballZ_ini_file.py'
Jan 26 09:34:13 compute-0 sudo[49949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:13 compute-0 python3.9[49951]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:34:13 compute-0 sudo[49949]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:14 compute-0 sudo[50101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffhtxlzeyzqvfaixsmnhokczpeiccfdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420053.8653357-552-213317031552889/AnsiballZ_ini_file.py'
Jan 26 09:34:14 compute-0 sudo[50101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:14 compute-0 python3.9[50103]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:34:14 compute-0 sudo[50101]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:14 compute-0 sudo[50253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpanzjmntptgqsanalbcacqjagmkwjgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420054.5654087-597-34548693169640/AnsiballZ_ini_file.py'
Jan 26 09:34:14 compute-0 sudo[50253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:15 compute-0 python3.9[50255]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:34:15 compute-0 sudo[50253]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:15 compute-0 sudo[50405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiuvdzgenkfctteoswwwsecemdrodtth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420055.44193-597-243444795882852/AnsiballZ_ini_file.py'
Jan 26 09:34:15 compute-0 sudo[50405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:15 compute-0 python3.9[50407]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:34:15 compute-0 sudo[50405]: pam_unix(sudo:session): session closed for user root
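Taken together, the five ini_file tasks hand resolv.conf and route management back to NetworkManager: no-auto-default=* stops it from generating "Wired connection N" profiles for new NICs, and any cloud-init era dns=none / rc-manager=unmanaged overrides are dropped from both NetworkManager.conf and conf.d/99-cloud-init.conf. With crudini (installed earlier in this run) the same edits are:

  sudo crudini --set /etc/NetworkManager/NetworkManager.conf main no-auto-default '*'
  sudo crudini --del /etc/NetworkManager/NetworkManager.conf main dns
  sudo crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main dns
  sudo crudini --del /etc/NetworkManager/NetworkManager.conf main rc-manager
  sudo crudini --del /etc/NetworkManager/conf.d/99-cloud-init.conf main rc-manager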
Jan 26 09:34:16 compute-0 sudo[50557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mspkrtfbsnjcrrixczocmnqqguufovdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420056.1697638-642-82974661958505/AnsiballZ_stat.py'
Jan 26 09:34:16 compute-0 sudo[50557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:16 compute-0 python3.9[50559]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:34:16 compute-0 sudo[50557]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:17 compute-0 sudo[50680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arojjcbwuvixvdrmxkjmjaarjoyqmdmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420056.1697638-642-82974661958505/AnsiballZ_copy.py'
Jan 26 09:34:17 compute-0 sudo[50680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:17 compute-0 python3.9[50682]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769420056.1697638-642-82974661958505/.source _original_basename=.k_r1rqgc follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:34:17 compute-0 sudo[50680]: pam_unix(sudo:session): session closed for user root
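The stat/copy pair is ansible.builtin.copy's normal remote flow: checksum the destination first, upload only on mismatch, then set mode 0755. The hook's contents are not logged; assuming a local copy of the file, the by-hand equivalent collapses to:

  sudo install -m 0755 dhclient-enter-hooks /etc/dhcp/dhclient-enter-hooks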
Jan 26 09:34:17 compute-0 sudo[50832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gocdgzmnovfhmxaboynvoardjqyzkekl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420057.557257-687-193065462615993/AnsiballZ_file.py'
Jan 26 09:34:17 compute-0 sudo[50832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:17 compute-0 python3.9[50834]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:34:18 compute-0 sudo[50832]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:18 compute-0 sudo[50984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-savwkpflrisekmnmrtuauwqugyuebhue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420058.2088857-711-263401036642427/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 26 09:34:18 compute-0 sudo[50984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:18 compute-0 python3.9[50986]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 26 09:34:18 compute-0 sudo[50984]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:19 compute-0 sudo[51136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yczoekwmgdduwyuxwpxhzphdqermamcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420059.0688384-738-90427730028281/AnsiballZ_file.py'
Jan 26 09:34:19 compute-0 sudo[51136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:19 compute-0 python3.9[51138]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:34:19 compute-0 sudo[51136]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:20 compute-0 sudo[51288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayliwqloecvlsgqtxwtcnkrthzpdgifb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420060.0638597-768-89995383656284/AnsiballZ_stat.py'
Jan 26 09:34:20 compute-0 sudo[51288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:20 compute-0 sudo[51288]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:20 compute-0 sudo[51411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnzmylvkcjfvxgjtlpbanpljvrihdply ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420060.0638597-768-89995383656284/AnsiballZ_copy.py'
Jan 26 09:34:20 compute-0 sudo[51411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:21 compute-0 sudo[51411]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:21 compute-0 sudo[51563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvkahuralmlnboesovhmksekjekgfcce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420061.3712187-813-251790251067983/AnsiballZ_slurp.py'
Jan 26 09:34:21 compute-0 sudo[51563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:22 compute-0 python3.9[51565]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 26 09:34:22 compute-0 sudo[51563]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:23 compute-0 sudo[51738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joynnxwhylgjmujjtovbxwzuluaobylj ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420062.3351355-840-162480063150249/async_wrapper.py j940556323767 300 /home/zuul/.ansible/tmp/ansible-tmp-1769420062.3351355-840-162480063150249/AnsiballZ_edpm_os_net_config.py _'
Jan 26 09:34:23 compute-0 sudo[51738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:23 compute-0 ansible-async_wrapper.py[51740]: Invoked with j940556323767 300 /home/zuul/.ansible/tmp/ansible-tmp-1769420062.3351355-840-162480063150249/AnsiballZ_edpm_os_net_config.py _
Jan 26 09:34:23 compute-0 ansible-async_wrapper.py[51743]: Starting module and watcher
Jan 26 09:34:23 compute-0 ansible-async_wrapper.py[51743]: Start watching 51744 (300)
Jan 26 09:34:23 compute-0 ansible-async_wrapper.py[51744]: Start module (51744)
Jan 26 09:34:23 compute-0 ansible-async_wrapper.py[51740]: Return async_wrapper task started.
Jan 26 09:34:23 compute-0 sudo[51738]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:23 compute-0 python3.9[51745]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
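The network change itself runs through Ansible's async wrapper with a 300-second budget, so a config that severs connectivity times out instead of hanging the play; edpm_os_net_config then drives os-net-config against the slurped /etc/os-net-config/config.yaml. A rough CLI rendering of the logged parameters (use_nmstate=True selects the nmstate-backed provider and is left out of this sketch; --detailed-exit-codes makes a successful change return 2):

  sudo os-net-config --config-file /etc/os-net-config/config.yaml \
      --debug --cleanup --detailed-exit-codes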
Jan 26 09:34:24 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 26 09:34:24 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 26 09:34:24 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 26 09:34:24 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 26 09:34:24 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.4368] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.4388] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5056] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5059] audit: op="connection-add" uuid="28bdb7ad-92c0-4434-b1f2-4b43b7782cd9" name="br-ex-br" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5078] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5081] audit: op="connection-add" uuid="d1963058-ad28-46b7-8291-e4b1034e923a" name="br-ex-port" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5093] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5094] audit: op="connection-add" uuid="38c8f896-c47c-4e95-af8b-c7f3f635f6ef" name="eth1-port" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5106] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5107] audit: op="connection-add" uuid="025556ea-9ee9-481b-a32e-c2715a7043c4" name="vlan20-port" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5119] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5121] audit: op="connection-add" uuid="12df2468-0dc9-4ee5-a157-656aacfce9c5" name="vlan21-port" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5133] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5135] audit: op="connection-add" uuid="7abbb6ca-d691-42c8-a6a0-57e327312ce9" name="vlan22-port" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5146] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5148] audit: op="connection-add" uuid="6fd71868-d3ee-448e-8329-f901b81148d2" name="vlan23-port" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5169] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5187] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5190] audit: op="connection-add" uuid="a720ca4b-902e-4be7-8c4d-37127e4c56b6" name="br-ex-if" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5567] audit: op="connection-update" uuid="16e61b0f-2f70-5c5d-a7c3-11c48ea7bbea" name="ci-private-network" args="ipv4.dns,ipv4.addresses,ipv4.never-default,ipv4.routing-rules,ipv4.method,ipv4.routes,connection.master,connection.port-type,connection.slave-type,connection.controller,connection.timestamp,ovs-interface.type,ipv6.addr-gen-mode,ipv6.addresses,ipv6.dns,ipv6.routing-rules,ipv6.method,ipv6.routes,ovs-external-ids.data" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5592] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5594] audit: op="connection-add" uuid="45f4a2e1-5735-4960-b79a-8e75a83c2bad" name="vlan20-if" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5609] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5611] audit: op="connection-add" uuid="18dd4697-eee5-4ae3-9829-e178f53c6dce" name="vlan21-if" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5626] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5628] audit: op="connection-add" uuid="04674f1c-4daf-49d5-ae19-fed8567d021f" name="vlan22-if" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5645] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5647] audit: op="connection-add" uuid="8b5e4d16-992b-427a-a43b-7dc2cffa0086" name="vlan23-if" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5659] audit: op="connection-delete" uuid="569a32bb-5b36-37fc-88bb-a15946fda745" name="Wired connection 1" pid=51746 uid=0 result="success"
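Under the rollback checkpoint created above (with its timeout adjusted), the tool lays the OVS topology down as NetworkManager profiles: an ovs-bridge for br-ex, an ovs-port plus ovs-interface pair per attachment (br-ex itself, eth1, vlan20 through vlan23), updates to the existing eth0/eth1 profiles, and deletion of the now-redundant auto-generated "Wired connection 1". The br-ex trio in nmcli terms, as a sketch with addressing omitted rather than the exact profiles above:

  nmcli connection add type ovs-bridge conn.interface br-ex con-name br-ex-br
  nmcli connection add type ovs-port conn.interface br-ex master br-ex-br con-name br-ex-port
  nmcli connection add type ovs-interface slave-type ovs-port conn.interface br-ex \
      master br-ex-port con-name br-ex-if ipv4.method disabled ipv6.method disabled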
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5670] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <warn>  [1769420065.5673] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5679] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5684] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (28bdb7ad-92c0-4434-b1f2-4b43b7782cd9)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5686] audit: op="connection-activate" uuid="28bdb7ad-92c0-4434-b1f2-4b43b7782cd9" name="br-ex-br" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5688] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <warn>  [1769420065.5689] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5694] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5698] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (d1963058-ad28-46b7-8291-e4b1034e923a)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5700] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <warn>  [1769420065.5702] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5706] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5711] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (38c8f896-c47c-4e95-af8b-c7f3f635f6ef)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5714] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <warn>  [1769420065.5715] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5721] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5726] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (025556ea-9ee9-481b-a32e-c2715a7043c4)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5728] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <warn>  [1769420065.5730] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5735] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5739] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (12df2468-0dc9-4ee5-a157-656aacfce9c5)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5742] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <warn>  [1769420065.5744] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5748] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5753] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (7abbb6ca-d691-42c8-a6a0-57e327312ce9)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5755] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <warn>  [1769420065.5756] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5761] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5765] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (6fd71868-d3ee-448e-8329-f901b81148d2)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5767] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5770] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5772] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5778] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <warn>  [1769420065.5780] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5784] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5788] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a720ca4b-902e-4be7-8c4d-37127e4c56b6)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5790] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5795] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5798] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5801] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5803] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5813] device (eth1): disconnecting for new activation request.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5815] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5817] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5819] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5821] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5823] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <warn>  [1769420065.5825] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5829] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5833] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (45f4a2e1-5735-4960-b79a-8e75a83c2bad)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5834] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5838] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5840] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5842] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5845] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <warn>  [1769420065.5847] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5850] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5855] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (18dd4697-eee5-4ae3-9829-e178f53c6dce)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5857] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5860] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5863] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5865] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5868] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <warn>  [1769420065.5870] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5874] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5879] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (04674f1c-4daf-49d5-ae19-fed8567d021f)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5880] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5885] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5887] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5889] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5893] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <warn>  [1769420065.5895] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5899] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5904] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (8b5e4d16-992b-427a-a43b-7dc2cffa0086)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5906] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5909] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5911] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5913] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5914] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5926] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5928] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5931] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5934] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5940] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5944] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5948] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5951] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5954] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5959] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5963] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5966] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5969] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5974] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5978] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5981] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5985] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5989] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 systemd-udevd[51751]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 09:34:25 compute-0 kernel: Timeout policy base is empty
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.5996] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6000] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6003] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6007] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6012] dhcp4 (eth0): canceled DHCP transaction
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6012] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6012] dhcp4 (eth0): state changed no lease
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6014] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6025] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6029] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51746 uid=0 result="fail" reason="Device is not activated"
Jan 26 09:34:25 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 09:34:25 compute-0 kernel: br-ex: entered promiscuous mode
Jan 26 09:34:25 compute-0 systemd-udevd[51750]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 09:34:25 compute-0 kernel: vlan20: entered promiscuous mode
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6564] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6572] dhcp4 (eth0): state changed new lease, address=38.102.83.230
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6593] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6599] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6609] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.6617] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 26 09:34:25 compute-0 kernel: vlan21: entered promiscuous mode
Jan 26 09:34:25 compute-0 kernel: vlan22: entered promiscuous mode
Jan 26 09:34:25 compute-0 kernel: vlan23: entered promiscuous mode
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7263] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7409] device (eth1): Activation: starting connection 'ci-private-network' (16e61b0f-2f70-5c5d-a7c3-11c48ea7bbea)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7414] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7416] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7417] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7419] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7420] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7422] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7423] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7425] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7437] device (eth1): disconnecting for new activation request.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7438] audit: op="connection-activate" uuid="16e61b0f-2f70-5c5d-a7c3-11c48ea7bbea" name="ci-private-network" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7441] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7460] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7467] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7475] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7477] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7481] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7485] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7488] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7493] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7497] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7501] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7505] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7510] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7515] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7521] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7526] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7531] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7536] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7566] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7576] device (eth1): Activation: starting connection 'ci-private-network' (16e61b0f-2f70-5c5d-a7c3-11c48ea7bbea)
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7579] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51746 uid=0 result="success"
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7583] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7610] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7614] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7623] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7629] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7647] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7655] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7663] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7669] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7683] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7697] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7706] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7712] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7718] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7720] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7728] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7735] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7741] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7747] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7755] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7758] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7760] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7765] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7771] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7776] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7782] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7784] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 09:34:25 compute-0 NetworkManager[48970]: <info>  [1769420065.7789] device (eth1): Activation: successful, device activated.
Jan 26 09:34:26 compute-0 sudo[52106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cddczyuhendascbpczivmokasztzjspl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420066.4693377-840-64195349780453/AnsiballZ_async_status.py'
Jan 26 09:34:26 compute-0 sudo[52106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:26 compute-0 python3.9[52108]: ansible-ansible.legacy.async_status Invoked with jid=j940556323767.51740 mode=status _async_dir=/root/.ansible_async
Jan 26 09:34:26 compute-0 sudo[52106]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:26 compute-0 NetworkManager[48970]: <info>  [1769420066.9700] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51746 uid=0 result="success"
Jan 26 09:34:27 compute-0 NetworkManager[48970]: <info>  [1769420067.2683] checkpoint[0x5632852e7950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 26 09:34:27 compute-0 NetworkManager[48970]: <info>  [1769420067.2686] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51746 uid=0 result="success"
Jan 26 09:34:27 compute-0 NetworkManager[48970]: <info>  [1769420067.6490] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51746 uid=0 result="success"
Jan 26 09:34:27 compute-0 NetworkManager[48970]: <info>  [1769420067.6499] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51746 uid=0 result="success"
Jan 26 09:34:27 compute-0 NetworkManager[48970]: <info>  [1769420067.8467] audit: op="networking-control" arg="global-dns-configuration" pid=51746 uid=0 result="success"
Jan 26 09:34:27 compute-0 NetworkManager[48970]: <info>  [1769420067.8602] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 26 09:34:27 compute-0 NetworkManager[48970]: <info>  [1769420067.8821] audit: op="networking-control" arg="global-dns-configuration" pid=51746 uid=0 result="success"
Jan 26 09:34:27 compute-0 NetworkManager[48970]: <info>  [1769420067.8849] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51746 uid=0 result="success"
Jan 26 09:34:28 compute-0 NetworkManager[48970]: <info>  [1769420068.0291] checkpoint[0x5632852e7a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 26 09:34:28 compute-0 NetworkManager[48970]: <info>  [1769420068.0294] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51746 uid=0 result="success"
Jan 26 09:34:28 compute-0 ansible-async_wrapper.py[51744]: Module complete (51744)
Jan 26 09:34:28 compute-0 ansible-async_wrapper.py[51743]: Done in kid B.
Jan 26 09:34:30 compute-0 sudo[52212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baufywjxlgyddovspmjtmoljnvzvdrqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420066.4693377-840-64195349780453/AnsiballZ_async_status.py'
Jan 26 09:34:30 compute-0 sudo[52212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:30 compute-0 python3.9[52214]: ansible-ansible.legacy.async_status Invoked with jid=j940556323767.51740 mode=status _async_dir=/root/.ansible_async
Jan 26 09:34:30 compute-0 sudo[52212]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:30 compute-0 sudo[52311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svqhizldpashbssqwsgnghhjmpbhaldn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420066.4693377-840-64195349780453/AnsiballZ_async_status.py'
Jan 26 09:34:30 compute-0 sudo[52311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:30 compute-0 python3.9[52313]: ansible-ansible.legacy.async_status Invoked with jid=j940556323767.51740 mode=cleanup _async_dir=/root/.ansible_async
Jan 26 09:34:30 compute-0 sudo[52311]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:31 compute-0 sudo[52464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnybhniowzaxxqfbieafsznvyhabewfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420071.1744442-921-188416501911624/AnsiballZ_stat.py'
Jan 26 09:34:31 compute-0 sudo[52464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:31 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 26 09:34:31 compute-0 python3.9[52466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:34:31 compute-0 sudo[52464]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:32 compute-0 sudo[52589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgxhqwziskfurguihtcoegpamojtutkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420071.1744442-921-188416501911624/AnsiballZ_copy.py'
Jan 26 09:34:32 compute-0 sudo[52589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:32 compute-0 python3.9[52591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769420071.1744442-921-188416501911624/.source.returncode _original_basename=.qzu84577 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:34:32 compute-0 sudo[52589]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:32 compute-0 sudo[52741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfajymuactoojinvelaqtuvtmpezwxwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420072.4579928-969-113345797528067/AnsiballZ_stat.py'
Jan 26 09:34:32 compute-0 sudo[52741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:32 compute-0 python3.9[52743]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:34:32 compute-0 sudo[52741]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:33 compute-0 sudo[52864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxepfgnsbcmrxicakzkcdjmwgregcnwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420072.4579928-969-113345797528067/AnsiballZ_copy.py'
Jan 26 09:34:33 compute-0 sudo[52864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:33 compute-0 python3.9[52866]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769420072.4579928-969-113345797528067/.source.cfg _original_basename=.lddr7a9x follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:34:33 compute-0 sudo[52864]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:33 compute-0 sudo[53017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmyiwtpugqgnblsbsizqykzmxgmntmvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420073.7141101-1014-116233151983592/AnsiballZ_systemd.py'
Jan 26 09:34:33 compute-0 sudo[53017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:34 compute-0 python3.9[53019]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:34:34 compute-0 systemd[1]: Reloading Network Manager...
Jan 26 09:34:34 compute-0 NetworkManager[48970]: <info>  [1769420074.2897] audit: op="reload" arg="0" pid=53023 uid=0 result="success"
Jan 26 09:34:34 compute-0 NetworkManager[48970]: <info>  [1769420074.2903] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 26 09:34:34 compute-0 systemd[1]: Reloaded Network Manager.
Jan 26 09:34:34 compute-0 sudo[53017]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:34 compute-0 sshd-session[44969]: Connection closed by 192.168.122.30 port 43918
Jan 26 09:34:34 compute-0 sshd-session[44966]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:34:34 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 26 09:34:34 compute-0 systemd[1]: session-10.scope: Consumed 48.957s CPU time.
Jan 26 09:34:34 compute-0 systemd-logind[787]: Session 10 logged out. Waiting for processes to exit.
Jan 26 09:34:34 compute-0 systemd-logind[787]: Removed session 10.
Jan 26 09:34:35 compute-0 sshd-session[53052]: Connection closed by authenticating user root 157.245.76.178 port 39266 [preauth]
Jan 26 09:34:40 compute-0 sshd-session[53055]: Accepted publickey for zuul from 192.168.122.30 port 52136 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:34:40 compute-0 systemd-logind[787]: New session 11 of user zuul.
Jan 26 09:34:40 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 26 09:34:40 compute-0 sshd-session[53055]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:34:41 compute-0 python3.9[53209]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:34:42 compute-0 python3.9[53363]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:34:44 compute-0 python3.9[53557]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:34:44 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 09:34:45 compute-0 sshd-session[53058]: Connection closed by 192.168.122.30 port 52136
Jan 26 09:34:45 compute-0 sshd-session[53055]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:34:45 compute-0 systemd-logind[787]: Session 11 logged out. Waiting for processes to exit.
Jan 26 09:34:45 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 26 09:34:45 compute-0 systemd[1]: session-11.scope: Consumed 2.104s CPU time.
Jan 26 09:34:45 compute-0 systemd-logind[787]: Removed session 11.
Jan 26 09:34:50 compute-0 sshd-session[53586]: Accepted publickey for zuul from 192.168.122.30 port 55526 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:34:50 compute-0 systemd-logind[787]: New session 12 of user zuul.
Jan 26 09:34:50 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 26 09:34:50 compute-0 sshd-session[53586]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:34:51 compute-0 python3.9[53739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:34:52 compute-0 python3.9[53893]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:34:52 compute-0 sudo[54047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icavuhnzzrgaidygwpznczkscgvyimbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420092.579364-75-250530306831812/AnsiballZ_setup.py'
Jan 26 09:34:52 compute-0 sudo[54047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:53 compute-0 python3.9[54049]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:34:53 compute-0 sudo[54047]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:53 compute-0 sudo[54132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhaknicufkfrnzvaxalmqnatgymnqykz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420092.579364-75-250530306831812/AnsiballZ_dnf.py'
Jan 26 09:34:53 compute-0 sudo[54132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:53 compute-0 python3.9[54134]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:34:55 compute-0 sudo[54132]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:55 compute-0 sudo[54285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sywulhrfursslmyiscnuebouowwbltpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420095.5516074-111-128019569608825/AnsiballZ_setup.py'
Jan 26 09:34:55 compute-0 sudo[54285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:56 compute-0 python3.9[54287]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:34:56 compute-0 sudo[54285]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:57 compute-0 sudo[54481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqgmvzzwrrbnpxekzhubcbkncbtmifjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420096.7494504-144-248840492830833/AnsiballZ_file.py'
Jan 26 09:34:57 compute-0 sudo[54481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:57 compute-0 python3.9[54483]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:34:57 compute-0 sudo[54481]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:57 compute-0 sudo[54633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-innplwqwiuczmnfvvvfbmrqgpyejuist ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420097.5588913-168-208296269271332/AnsiballZ_command.py'
Jan 26 09:34:57 compute-0 sudo[54633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:58 compute-0 python3.9[54635]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:34:58 compute-0 podman[54636]: 2026-01-26 09:34:58.171647524 +0000 UTC m=+0.054208512 system refresh
Jan 26 09:34:58 compute-0 sudo[54633]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:58 compute-0 sudo[54797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrblyzagbwdvhhyrldofgbeehiubexdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420098.463996-192-168258562997528/AnsiballZ_stat.py'
Jan 26 09:34:58 compute-0 sudo[54797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:59 compute-0 python3.9[54799]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:34:59 compute-0 sudo[54797]: pam_unix(sudo:session): session closed for user root
Jan 26 09:34:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:34:59 compute-0 sudo[54920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhtavmmkmicgxceeiaxvfvwffxbbhqwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420098.463996-192-168258562997528/AnsiballZ_copy.py'
Jan 26 09:34:59 compute-0 sudo[54920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:34:59 compute-0 python3.9[54922]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420098.463996-192-168258562997528/.source.json follow=False _original_basename=podman_network_config.j2 checksum=7ae5c880ae9e38e9d079461f8b7ea5caa6251d7d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:34:59 compute-0 sudo[54920]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:00 compute-0 sudo[55072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jupkbdqooherhdyqjvfbxshctjtkaeht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420099.94238-237-13447338202214/AnsiballZ_stat.py'
Jan 26 09:35:00 compute-0 sudo[55072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:00 compute-0 python3.9[55074]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:35:00 compute-0 sudo[55072]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:00 compute-0 sudo[55195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehylvpesholanasbafabgfwrbmzvwkdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420099.94238-237-13447338202214/AnsiballZ_copy.py'
Jan 26 09:35:00 compute-0 sudo[55195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:00 compute-0 python3.9[55197]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769420099.94238-237-13447338202214/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:35:00 compute-0 sudo[55195]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:01 compute-0 sudo[55347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpipmnfrqykjhoivdxmhctjuvoqhpppb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420101.1789105-285-101685875926655/AnsiballZ_ini_file.py'
Jan 26 09:35:01 compute-0 sudo[55347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:01 compute-0 python3.9[55349]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:35:01 compute-0 sudo[55347]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:02 compute-0 sudo[55499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnzlinmsgrhmqttpwyleklkldbhdhimn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420101.8897996-285-187299699481082/AnsiballZ_ini_file.py'
Jan 26 09:35:02 compute-0 sudo[55499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:02 compute-0 python3.9[55501]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:35:02 compute-0 sudo[55499]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:02 compute-0 sudo[55651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifkphvdhuniwfzapbbauetdfwdgxrewn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420102.4234645-285-40065062256554/AnsiballZ_ini_file.py'
Jan 26 09:35:02 compute-0 sudo[55651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:02 compute-0 python3.9[55653]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:35:02 compute-0 sudo[55651]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:03 compute-0 sudo[55803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmlwrsvozvrlxhgndasgioztmfexdhfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420102.9550214-285-277499980015068/AnsiballZ_ini_file.py'
Jan 26 09:35:03 compute-0 sudo[55803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:03 compute-0 python3.9[55805]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:35:03 compute-0 sudo[55803]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:04 compute-0 sudo[55955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fadxbxuwsufiwgsabvxwoqpxjmkdlhkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420103.8778565-378-195878600900038/AnsiballZ_dnf.py'
Jan 26 09:35:04 compute-0 sudo[55955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:04 compute-0 python3.9[55957]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:35:05 compute-0 sudo[55955]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:06 compute-0 sudo[56108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctydjsmvvzdcqlgkmpnpvgyagyrjlfer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420106.356106-411-129651082860758/AnsiballZ_setup.py'
Jan 26 09:35:06 compute-0 sudo[56108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:06 compute-0 python3.9[56110]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:35:06 compute-0 sudo[56108]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:07 compute-0 sudo[56262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btwchaqdnsheoydwmwxebjvyqhgjneuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420107.143319-435-48023801152818/AnsiballZ_stat.py'
Jan 26 09:35:07 compute-0 sudo[56262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:07 compute-0 python3.9[56264]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:35:07 compute-0 sudo[56262]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:08 compute-0 sudo[56414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovfmevirbjndaggfzqnvqfxazwiujdlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420107.8833594-462-71016758846864/AnsiballZ_stat.py'
Jan 26 09:35:08 compute-0 sudo[56414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:08 compute-0 python3.9[56416]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:35:08 compute-0 sudo[56414]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:08 compute-0 sudo[56566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ratgocyyptibbiontwalxigcybmwzbrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420108.635488-492-168535303073524/AnsiballZ_command.py'
Jan 26 09:35:08 compute-0 sudo[56566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:09 compute-0 python3.9[56568]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:35:09 compute-0 sudo[56566]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:09 compute-0 sudo[56719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woppjdejquduagircwxcmxnstoxourry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420109.4208908-522-34287796001103/AnsiballZ_service_facts.py'
Jan 26 09:35:09 compute-0 sudo[56719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:09 compute-0 python3.9[56721]: ansible-service_facts Invoked
Jan 26 09:35:10 compute-0 network[56738]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 09:35:10 compute-0 network[56739]: 'network-scripts' will be removed from distribution in near future.
Jan 26 09:35:10 compute-0 network[56740]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 09:35:12 compute-0 sudo[56719]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:14 compute-0 sudo[57023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alzlppbxkdxeqqwyofktlfbjqkqwsbrm ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769420114.6154907-567-171711978861047/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769420114.6154907-567-171711978861047/args'
Jan 26 09:35:14 compute-0 sudo[57023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:15 compute-0 sudo[57023]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:15 compute-0 sudo[57190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjurxqcjgksxciyprooelmebkllbmfjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420115.4075506-600-54514055759102/AnsiballZ_dnf.py'
Jan 26 09:35:15 compute-0 sudo[57190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:15 compute-0 python3.9[57192]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:35:17 compute-0 sudo[57190]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:18 compute-0 sudo[57343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrvqokpayrbexckculmksrnctitbczjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420118.099231-639-195351847852269/AnsiballZ_package_facts.py'
Jan 26 09:35:18 compute-0 sudo[57343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:19 compute-0 python3.9[57345]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 26 09:35:19 compute-0 sudo[57343]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:20 compute-0 sudo[57495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftraktabkktjjcmvqbneqhrvgdwueiex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420120.011246-669-106785613262325/AnsiballZ_stat.py'
Jan 26 09:35:20 compute-0 sudo[57495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:20 compute-0 python3.9[57497]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:35:20 compute-0 sudo[57495]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:20 compute-0 sudo[57620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvsaovqvmerobsukqnnoupkemqyzzmub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420120.011246-669-106785613262325/AnsiballZ_copy.py'
Jan 26 09:35:20 compute-0 sudo[57620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:21 compute-0 python3.9[57622]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769420120.011246-669-106785613262325/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:35:21 compute-0 sudo[57620]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:21 compute-0 sudo[57774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmfntqzevdraxdgesuynfuoubplotfso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420121.3502572-714-4132225014869/AnsiballZ_stat.py'
Jan 26 09:35:21 compute-0 sudo[57774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:21 compute-0 python3.9[57776]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:35:21 compute-0 sudo[57774]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:22 compute-0 sudo[57901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mabpehzubpnfocjcuzmffklykqlsuhay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420121.3502572-714-4132225014869/AnsiballZ_copy.py'
Jan 26 09:35:22 compute-0 sudo[57901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:22 compute-0 python3.9[57903]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769420121.3502572-714-4132225014869/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:35:22 compute-0 sudo[57901]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:22 compute-0 sshd-session[57826]: Connection closed by authenticating user root 157.245.76.178 port 41244 [preauth]
Jan 26 09:35:24 compute-0 sudo[58055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqwaslycaffmrgsyiisqjjppchleiuxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420123.3868349-777-210113016288891/AnsiballZ_lineinfile.py'
Jan 26 09:35:24 compute-0 sudo[58055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:24 compute-0 python3.9[58057]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:35:24 compute-0 sudo[58055]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:25 compute-0 sudo[58209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwxpgnbaqfljyheeiklxxzxphtxaotqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420125.2579215-822-88889254144953/AnsiballZ_setup.py'
Jan 26 09:35:25 compute-0 sudo[58209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:25 compute-0 python3.9[58211]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:35:26 compute-0 sudo[58209]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:26 compute-0 sudo[58293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xveajwzfqzlhyvuorcnialjdjdvpawvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420125.2579215-822-88889254144953/AnsiballZ_systemd.py'
Jan 26 09:35:26 compute-0 sudo[58293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:26 compute-0 python3.9[58295]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:35:27 compute-0 sudo[58293]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:28 compute-0 sudo[58447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhgyuvrspgkazhvqytkijppeubdkbcjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420128.6618555-870-215612977271833/AnsiballZ_setup.py'
Jan 26 09:35:28 compute-0 sudo[58447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:29 compute-0 python3.9[58449]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:35:29 compute-0 sudo[58447]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:29 compute-0 sudo[58531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvibjdmakuehxcqpziraqinkkwokonlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420128.6618555-870-215612977271833/AnsiballZ_systemd.py'
Jan 26 09:35:29 compute-0 sudo[58531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:29 compute-0 python3.9[58533]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:35:29 compute-0 chronyd[797]: chronyd exiting
Jan 26 09:35:29 compute-0 systemd[1]: Stopping NTP client/server...
Jan 26 09:35:29 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 26 09:35:29 compute-0 systemd[1]: Stopped NTP client/server.
Jan 26 09:35:29 compute-0 systemd[1]: Starting NTP client/server...
Jan 26 09:35:29 compute-0 chronyd[58542]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 26 09:35:29 compute-0 chronyd[58542]: Frequency -26.570 +/- 0.464 ppm read from /var/lib/chrony/drift
Jan 26 09:35:29 compute-0 chronyd[58542]: Loaded seccomp filter (level 2)
Jan 26 09:35:29 compute-0 systemd[1]: Started NTP client/server.
Jan 26 09:35:30 compute-0 sudo[58531]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:30 compute-0 sshd-session[53589]: Connection closed by 192.168.122.30 port 55526
Jan 26 09:35:30 compute-0 sshd-session[53586]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:35:30 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 26 09:35:30 compute-0 systemd[1]: session-12.scope: Consumed 23.980s CPU time.
Jan 26 09:35:30 compute-0 systemd-logind[787]: Session 12 logged out. Waiting for processes to exit.
Jan 26 09:35:30 compute-0 systemd-logind[787]: Removed session 12.
Jan 26 09:35:35 compute-0 sshd-session[58568]: Accepted publickey for zuul from 192.168.122.30 port 36198 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:35:35 compute-0 systemd-logind[787]: New session 13 of user zuul.
Jan 26 09:35:35 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 26 09:35:35 compute-0 sshd-session[58568]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:35:36 compute-0 sudo[58721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdqbuwkubbnzllgfeglbnyegecrznfnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420136.0737965-21-22519800626264/AnsiballZ_file.py'
Jan 26 09:35:36 compute-0 sudo[58721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:36 compute-0 python3.9[58723]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:35:36 compute-0 sudo[58721]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:37 compute-0 sudo[58873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beqxveziohterfvsqebhnfhujcbfcucj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420136.9919088-57-126407562240284/AnsiballZ_stat.py'
Jan 26 09:35:37 compute-0 sudo[58873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:37 compute-0 python3.9[58875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:35:37 compute-0 sudo[58873]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:38 compute-0 sudo[58996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyqznrwbpywwqobvzinwaslvffcimkro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420136.9919088-57-126407562240284/AnsiballZ_copy.py'
Jan 26 09:35:38 compute-0 sudo[58996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:38 compute-0 python3.9[58998]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769420136.9919088-57-126407562240284/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:35:38 compute-0 sudo[58996]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:38 compute-0 sshd-session[58571]: Connection closed by 192.168.122.30 port 36198
Jan 26 09:35:38 compute-0 sshd-session[58568]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:35:38 compute-0 systemd-logind[787]: Session 13 logged out. Waiting for processes to exit.
Jan 26 09:35:38 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 26 09:35:38 compute-0 systemd[1]: session-13.scope: Consumed 1.583s CPU time.
Jan 26 09:35:38 compute-0 systemd-logind[787]: Removed session 13.
Jan 26 09:35:45 compute-0 sshd-session[59023]: Accepted publickey for zuul from 192.168.122.30 port 58960 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:35:45 compute-0 systemd-logind[787]: New session 14 of user zuul.
Jan 26 09:35:45 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 26 09:35:45 compute-0 sshd-session[59023]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:35:46 compute-0 python3.9[59176]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:35:47 compute-0 sudo[59330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkegpobgyzmqamenktzbsevckxmhauci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420146.7808738-54-8141215284769/AnsiballZ_file.py'
Jan 26 09:35:47 compute-0 sudo[59330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:47 compute-0 python3.9[59332]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:35:47 compute-0 sudo[59330]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:48 compute-0 sudo[59505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npmwcplhaqreylhbqobzacejncvsgiyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420147.6495273-78-80984566069434/AnsiballZ_stat.py'
Jan 26 09:35:48 compute-0 sudo[59505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:48 compute-0 python3.9[59507]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:35:48 compute-0 sudo[59505]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:49 compute-0 sudo[59628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kykyywkpudzvieluqrddnhypkftfgqdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420147.6495273-78-80984566069434/AnsiballZ_copy.py'
Jan 26 09:35:49 compute-0 sudo[59628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:49 compute-0 python3.9[59630]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769420147.6495273-78-80984566069434/.source.json _original_basename=.bx28ynsl follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:35:49 compute-0 sudo[59628]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:50 compute-0 sudo[59780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwplezvlhqzpxzesdiksxgsmrhchkfwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420150.1987646-147-73655918633795/AnsiballZ_stat.py'
Jan 26 09:35:50 compute-0 sudo[59780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:50 compute-0 python3.9[59782]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:35:50 compute-0 sudo[59780]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:51 compute-0 sudo[59903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izkthlnyeivzfijtjzyptupostodqofn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420150.1987646-147-73655918633795/AnsiballZ_copy.py'
Jan 26 09:35:51 compute-0 sudo[59903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:51 compute-0 python3.9[59905]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769420150.1987646-147-73655918633795/.source _original_basename=.ol9a1cjw follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:35:51 compute-0 sudo[59903]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:51 compute-0 sudo[60055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvqdvhhznepilhyeozjwxtyuayempxdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420151.549267-195-234996760739561/AnsiballZ_file.py'
Jan 26 09:35:51 compute-0 sudo[60055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:52 compute-0 python3.9[60057]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:35:52 compute-0 sudo[60055]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:52 compute-0 sudo[60207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fretjiwgyinrhpusshznsetulniubjby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420152.2679334-219-63742648128163/AnsiballZ_stat.py'
Jan 26 09:35:52 compute-0 sudo[60207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:52 compute-0 python3.9[60209]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:35:52 compute-0 sudo[60207]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:53 compute-0 sudo[60330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqraglknejuocauoifnqwnbmmeszdnte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420152.2679334-219-63742648128163/AnsiballZ_copy.py'
Jan 26 09:35:53 compute-0 sudo[60330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:53 compute-0 python3.9[60333]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769420152.2679334-219-63742648128163/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:35:53 compute-0 sudo[60330]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:53 compute-0 sudo[60483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phugwpyrjkiivlljpqpcwvspbzqdukgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420153.4362118-219-87935849486827/AnsiballZ_stat.py'
Jan 26 09:35:53 compute-0 sudo[60483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:53 compute-0 python3.9[60485]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:35:53 compute-0 sudo[60483]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:54 compute-0 sudo[60606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpgoezrvvarsybzbyyepgxlodfpdxwjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420153.4362118-219-87935849486827/AnsiballZ_copy.py'
Jan 26 09:35:54 compute-0 sudo[60606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:54 compute-0 python3.9[60608]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769420153.4362118-219-87935849486827/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:35:54 compute-0 sudo[60606]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:54 compute-0 sudo[60758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npfdxwbnsnuvrpynehltefwsaktkkgrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420154.551203-306-75045651500531/AnsiballZ_file.py'
Jan 26 09:35:54 compute-0 sudo[60758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:54 compute-0 python3.9[60760]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:35:55 compute-0 sudo[60758]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:55 compute-0 sudo[60910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izbzqbqjiluuwbtjjmquwrswhlxahgvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420155.2240314-330-15592409440402/AnsiballZ_stat.py'
Jan 26 09:35:55 compute-0 sudo[60910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:55 compute-0 python3.9[60912]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:35:55 compute-0 sudo[60910]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:56 compute-0 sudo[61033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xalgaohztqiwlllsdczmtjwqwlylempp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420155.2240314-330-15592409440402/AnsiballZ_copy.py'
Jan 26 09:35:56 compute-0 sudo[61033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:56 compute-0 python3.9[61035]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420155.2240314-330-15592409440402/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:35:56 compute-0 sudo[61033]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:56 compute-0 sudo[61185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trlagyhdhwvxvckefvvkffromytzvlqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420156.415065-375-69071688481016/AnsiballZ_stat.py'
Jan 26 09:35:56 compute-0 sudo[61185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:56 compute-0 python3.9[61187]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:35:56 compute-0 sudo[61185]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:57 compute-0 sudo[61308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzgzdrhwukqmwlkutyruocnmmwvsbply ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420156.415065-375-69071688481016/AnsiballZ_copy.py'
Jan 26 09:35:57 compute-0 sudo[61308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:57 compute-0 python3.9[61310]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420156.415065-375-69071688481016/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:35:57 compute-0 sudo[61308]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:58 compute-0 sudo[61460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwmumfoapikoawrfzwtgtlcpwetczzxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420157.637435-420-238259820435729/AnsiballZ_systemd.py'
Jan 26 09:35:58 compute-0 sudo[61460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:58 compute-0 python3.9[61462]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:35:58 compute-0 systemd[1]: Reloading.
Jan 26 09:35:58 compute-0 systemd-rc-local-generator[61488]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:35:58 compute-0 systemd-sysv-generator[61491]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:35:58 compute-0 systemd[1]: Reloading.
Jan 26 09:35:58 compute-0 systemd-sysv-generator[61527]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:35:58 compute-0 systemd-rc-local-generator[61523]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:35:59 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 26 09:35:59 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 26 09:35:59 compute-0 sudo[61460]: pam_unix(sudo:session): session closed for user root
Jan 26 09:35:59 compute-0 sudo[61688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgurolctfozedafvtkqugxqxofimjryy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420159.3835464-444-169023543941037/AnsiballZ_stat.py'
Jan 26 09:35:59 compute-0 sudo[61688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:35:59 compute-0 python3.9[61690]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:35:59 compute-0 sudo[61688]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:00 compute-0 sudo[61811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-segdotfimzyhnnkjpspndicmmaljwksw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420159.3835464-444-169023543941037/AnsiballZ_copy.py'
Jan 26 09:36:00 compute-0 sudo[61811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:00 compute-0 python3.9[61813]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420159.3835464-444-169023543941037/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:00 compute-0 sudo[61811]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:00 compute-0 sudo[61963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byvbzgpqgruzrcrywrjklwcuurmludah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420160.55939-489-170631999853911/AnsiballZ_stat.py'
Jan 26 09:36:00 compute-0 sudo[61963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:01 compute-0 python3.9[61965]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:36:01 compute-0 sudo[61963]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:01 compute-0 sudo[62086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ughfhvgsrkdryszkelnkylbanpqnhpyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420160.55939-489-170631999853911/AnsiballZ_copy.py'
Jan 26 09:36:01 compute-0 sudo[62086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:01 compute-0 python3.9[62088]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420160.55939-489-170631999853911/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:01 compute-0 sudo[62086]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:02 compute-0 sudo[62238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmrlliqlxngxtmorhxymuhtadyoindky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420161.7387052-534-228833204685194/AnsiballZ_systemd.py'
Jan 26 09:36:02 compute-0 sudo[62238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:02 compute-0 python3.9[62240]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:36:02 compute-0 systemd[1]: Reloading.
Jan 26 09:36:02 compute-0 systemd-rc-local-generator[62265]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:36:02 compute-0 systemd-sysv-generator[62268]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:36:02 compute-0 systemd[1]: Reloading.
Jan 26 09:36:02 compute-0 systemd-rc-local-generator[62306]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:36:02 compute-0 systemd-sysv-generator[62310]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:36:02 compute-0 systemd[1]: Starting Create netns directory...
Jan 26 09:36:02 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 26 09:36:02 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 26 09:36:02 compute-0 systemd[1]: Finished Create netns directory.
Jan 26 09:36:02 compute-0 sudo[62238]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:03 compute-0 python3.9[62467]: ansible-ansible.builtin.service_facts Invoked
Jan 26 09:36:03 compute-0 network[62484]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 09:36:03 compute-0 network[62485]: 'network-scripts' will be removed from distribution in near future.
Jan 26 09:36:03 compute-0 network[62486]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 09:36:07 compute-0 sudo[62746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjhbckvlbjpxyjqyzfoontqtmxekaaoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420167.531698-582-81029189830972/AnsiballZ_systemd.py'
Jan 26 09:36:07 compute-0 sudo[62746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:08 compute-0 python3.9[62748]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:36:08 compute-0 systemd[1]: Reloading.
Jan 26 09:36:08 compute-0 systemd-rc-local-generator[62778]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:36:08 compute-0 systemd-sysv-generator[62781]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:36:08 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 26 09:36:08 compute-0 iptables.init[62788]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 26 09:36:08 compute-0 iptables.init[62788]: iptables: Flushing firewall rules: [  OK  ]
Jan 26 09:36:08 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 26 09:36:08 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 26 09:36:08 compute-0 sudo[62746]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:08 compute-0 sshd-session[62807]: Connection closed by authenticating user root 157.245.76.178 port 44258 [preauth]
Jan 26 09:36:09 compute-0 sudo[62984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsthdgeivxgomjwtvezjxqilkhsubhwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420168.9684553-582-112588228399953/AnsiballZ_systemd.py'
Jan 26 09:36:09 compute-0 sudo[62984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:09 compute-0 python3.9[62986]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:36:09 compute-0 sudo[62984]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:10 compute-0 sudo[63138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elxdfaohudizfsetovfbwsngdfmtrhtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420169.8810458-630-91657709761810/AnsiballZ_systemd.py'
Jan 26 09:36:10 compute-0 sudo[63138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:10 compute-0 python3.9[63140]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:36:10 compute-0 systemd[1]: Reloading.
Jan 26 09:36:10 compute-0 systemd-rc-local-generator[63166]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:36:10 compute-0 systemd-sysv-generator[63169]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:36:10 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 26 09:36:10 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 26 09:36:10 compute-0 sudo[63138]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:11 compute-0 sudo[63331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbbkhzulvwoawomqliucjriwovrhooop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420170.9457042-654-56154138706291/AnsiballZ_command.py'
Jan 26 09:36:11 compute-0 sudo[63331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:11 compute-0 python3.9[63333]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:36:11 compute-0 sudo[63331]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:12 compute-0 sudo[63484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zleemlifktlvmwsyfqzpbqtgbvawfaca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420172.159789-696-25813342381041/AnsiballZ_stat.py'
Jan 26 09:36:12 compute-0 sudo[63484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:12 compute-0 python3.9[63486]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:36:12 compute-0 sudo[63484]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:13 compute-0 sudo[63609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkviupcqpelmhjohxnqcemnimlgawwtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420172.159789-696-25813342381041/AnsiballZ_copy.py'
Jan 26 09:36:13 compute-0 sudo[63609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:13 compute-0 python3.9[63611]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769420172.159789-696-25813342381041/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:13 compute-0 sudo[63609]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:13 compute-0 sudo[63762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yosmdhpfniqatrqkffqwsrtpxtsxjufz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420173.5587783-741-43841913141696/AnsiballZ_systemd.py'
Jan 26 09:36:13 compute-0 sudo[63762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:14 compute-0 python3.9[63764]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:36:14 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 26 09:36:14 compute-0 sshd[1008]: Received SIGHUP; restarting.
Jan 26 09:36:14 compute-0 sshd[1008]: Server listening on 0.0.0.0 port 22.
Jan 26 09:36:14 compute-0 sshd[1008]: Server listening on :: port 22.
Jan 26 09:36:14 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 26 09:36:14 compute-0 sudo[63762]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:14 compute-0 sudo[63918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxjvkswgsmtvzjadesdijyepnjuydmbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420174.467338-765-30622893846055/AnsiballZ_file.py'
Jan 26 09:36:14 compute-0 sudo[63918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:15 compute-0 python3.9[63920]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:15 compute-0 sudo[63918]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:15 compute-0 sudo[64070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szcxvttykdjwyprjmlrfwdspxvrskemb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420175.2072976-789-243877348009783/AnsiballZ_stat.py'
Jan 26 09:36:15 compute-0 sudo[64070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:15 compute-0 python3.9[64072]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:36:15 compute-0 sudo[64070]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:16 compute-0 sudo[64193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rydocnumwcvyidmzltzcvzkakvkjyinh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420175.2072976-789-243877348009783/AnsiballZ_copy.py'
Jan 26 09:36:16 compute-0 sudo[64193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:16 compute-0 python3.9[64195]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420175.2072976-789-243877348009783/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:16 compute-0 sudo[64193]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:17 compute-0 sudo[64345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpqcrfqqrxedxabwukqswgbmrwjkxcie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420176.7880058-843-278154972176703/AnsiballZ_timezone.py'
Jan 26 09:36:17 compute-0 sudo[64345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:17 compute-0 python3.9[64347]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 26 09:36:17 compute-0 systemd[1]: Starting Time & Date Service...
Jan 26 09:36:17 compute-0 systemd[1]: Started Time & Date Service.
Jan 26 09:36:17 compute-0 sudo[64345]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:18 compute-0 sudo[64501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpulibqvkahmqetdhooaezilesrzvygf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420177.8666368-870-132296945567277/AnsiballZ_file.py'
Jan 26 09:36:18 compute-0 sudo[64501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:18 compute-0 python3.9[64503]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:18 compute-0 sudo[64501]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:18 compute-0 sudo[64653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olkqdsgcepofsuawfdoncowwutvplebp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420178.5766344-894-6865632727450/AnsiballZ_stat.py'
Jan 26 09:36:18 compute-0 sudo[64653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:19 compute-0 python3.9[64655]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:36:19 compute-0 sudo[64653]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:19 compute-0 sudo[64776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwwzgrwvfkjrhgqcszthtrwovbusgkwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420178.5766344-894-6865632727450/AnsiballZ_copy.py'
Jan 26 09:36:19 compute-0 sudo[64776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:19 compute-0 python3.9[64778]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769420178.5766344-894-6865632727450/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:19 compute-0 sudo[64776]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:20 compute-0 sudo[64928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlwuoqvanvwzzlqhudckitmiycoxkpkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420179.858486-939-13608306111562/AnsiballZ_stat.py'
Jan 26 09:36:20 compute-0 sudo[64928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:20 compute-0 python3.9[64930]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:36:20 compute-0 sudo[64928]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:20 compute-0 sudo[65051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adkvlmrrshpfevjvfrtlsifigwewnvlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420179.858486-939-13608306111562/AnsiballZ_copy.py'
Jan 26 09:36:20 compute-0 sudo[65051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:20 compute-0 python3.9[65053]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769420179.858486-939-13608306111562/.source.yaml _original_basename=.ik4_lvmp follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:20 compute-0 sudo[65051]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:21 compute-0 sudo[65203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozviqchfpegpkfcysxjrbsjewkzzgfmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420181.1201203-984-129968573618375/AnsiballZ_stat.py'
Jan 26 09:36:21 compute-0 sudo[65203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:21 compute-0 python3.9[65205]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:36:21 compute-0 sudo[65203]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:21 compute-0 sudo[65326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkyxglfgnccjwimujayeeyydyggbqovy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420181.1201203-984-129968573618375/AnsiballZ_copy.py'
Jan 26 09:36:21 compute-0 sudo[65326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:22 compute-0 python3.9[65328]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420181.1201203-984-129968573618375/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:22 compute-0 sudo[65326]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:22 compute-0 sudo[65478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esiytvqgrcmsvtcpsaefwuvotljwimrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420182.3258638-1029-157110367331262/AnsiballZ_command.py'
Jan 26 09:36:22 compute-0 sudo[65478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:22 compute-0 python3.9[65480]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:36:22 compute-0 sudo[65478]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:23 compute-0 sudo[65631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpddevuuorhbcomttahtibidqihqasco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420183.103572-1053-204055941808211/AnsiballZ_command.py'
Jan 26 09:36:23 compute-0 sudo[65631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:23 compute-0 python3.9[65633]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:36:23 compute-0 sudo[65631]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:24 compute-0 sudo[65784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctbxjecfhqtqbfjwmqfbadgezmgfwnvn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769420183.74324-1077-10520308087819/AnsiballZ_edpm_nftables_from_files.py'
Jan 26 09:36:24 compute-0 sudo[65784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:24 compute-0 python3[65786]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 09:36:24 compute-0 sudo[65784]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:24 compute-0 sudo[65936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyjokcrcndbftfyphudojrfhouyszrnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420184.5072768-1101-224220289665712/AnsiballZ_stat.py'
Jan 26 09:36:24 compute-0 sudo[65936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:25 compute-0 python3.9[65938]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:36:25 compute-0 sudo[65936]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:25 compute-0 sudo[66059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nagbmwyfyreuzafedtqqafvxqzodpbqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420184.5072768-1101-224220289665712/AnsiballZ_copy.py'
Jan 26 09:36:25 compute-0 sudo[66059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:25 compute-0 python3.9[66061]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420184.5072768-1101-224220289665712/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:25 compute-0 sudo[66059]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:26 compute-0 sudo[66211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azwythvrgnsypkshldpwzbgovlaadftr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420185.8112607-1146-153621317223505/AnsiballZ_stat.py'
Jan 26 09:36:26 compute-0 sudo[66211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:26 compute-0 python3.9[66213]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:36:26 compute-0 sudo[66211]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:26 compute-0 sudo[66334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrprgpnizydawiwuubgfiepqavhadsuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420185.8112607-1146-153621317223505/AnsiballZ_copy.py'
Jan 26 09:36:26 compute-0 sudo[66334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:27 compute-0 python3.9[66336]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420185.8112607-1146-153621317223505/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:27 compute-0 sudo[66334]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:27 compute-0 sudo[66486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htcrrcghexxnltkftexhmetcdatacfvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420187.2882962-1191-26182450820000/AnsiballZ_stat.py'
Jan 26 09:36:27 compute-0 sudo[66486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:27 compute-0 python3.9[66488]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:36:27 compute-0 sudo[66486]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:28 compute-0 sudo[66609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaemmwdowqbhgqpsxogwcwnjlabxvuvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420187.2882962-1191-26182450820000/AnsiballZ_copy.py'
Jan 26 09:36:28 compute-0 sudo[66609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:28 compute-0 python3.9[66611]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420187.2882962-1191-26182450820000/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:28 compute-0 sudo[66609]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:28 compute-0 sudo[66761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtwftybueeaozddyyepnqofmszuulxfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420188.529665-1236-112173236989091/AnsiballZ_stat.py'
Jan 26 09:36:28 compute-0 sudo[66761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:29 compute-0 python3.9[66763]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:36:29 compute-0 sudo[66761]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:29 compute-0 sudo[66884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcsqsszvmwcamkurbqlqpiigktrmxwvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420188.529665-1236-112173236989091/AnsiballZ_copy.py'
Jan 26 09:36:29 compute-0 sudo[66884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:29 compute-0 python3.9[66886]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420188.529665-1236-112173236989091/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:29 compute-0 sudo[66884]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:30 compute-0 sudo[67036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnbwwpsvcmfywsvqogrwpxehhixzwmcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420189.8080008-1281-261000073918570/AnsiballZ_stat.py'
Jan 26 09:36:30 compute-0 sudo[67036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:30 compute-0 python3.9[67038]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:36:30 compute-0 sudo[67036]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:30 compute-0 sudo[67159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oefscmwpgmsqganfyftyxsrrdvrvegay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420189.8080008-1281-261000073918570/AnsiballZ_copy.py'
Jan 26 09:36:30 compute-0 sudo[67159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:30 compute-0 python3.9[67161]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420189.8080008-1281-261000073918570/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:30 compute-0 sudo[67159]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:31 compute-0 sudo[67311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uagocsjvltzidfbpgnsjpzizutcmbvxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420191.2512362-1326-163956557011412/AnsiballZ_file.py'
Jan 26 09:36:31 compute-0 sudo[67311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:31 compute-0 python3.9[67313]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:31 compute-0 sudo[67311]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:32 compute-0 sudo[67463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxbksxsqgxvzrekxoiticoezmcyqznno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420191.9363286-1350-85836243157480/AnsiballZ_command.py'
Jan 26 09:36:32 compute-0 sudo[67463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:32 compute-0 python3.9[67465]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:36:32 compute-0 sudo[67463]: pam_unix(sudo:session): session closed for user root
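Note that this step is non-destructive: the five EDPM fragments are concatenated and fed to nft's check mode, which parses and validates the combined ruleset without installing it, so a syntax error fails the play before the kernel ruleset is touched. As invoked above:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -    # -c = check only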
Jan 26 09:36:33 compute-0 sudo[67622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwlzmghxzxqhvcshvynvsuapndttyxim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420192.6452792-1374-80412173116901/AnsiballZ_blockinfile.py'
Jan 26 09:36:33 compute-0 sudo[67622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:33 compute-0 python3.9[67624]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:33 compute-0 sudo[67622]: pam_unix(sudo:session): session closed for user root
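The blockinfile task makes the rules persistent: it drops the include list between ANSIBLE MANAGED BLOCK markers in /etc/sysconfig/nftables.conf (which nftables.service reads at boot) and validates the edited file with nft -c -f %s before moving it into place. Reconstructed from the block= and marker= parameters logged above (the file itself is not captured in the log), the managed block should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK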
Jan 26 09:36:33 compute-0 sudo[67775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tufcsunebtxgklyxnmaazjzaroltqbhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420193.633398-1401-269466037333993/AnsiballZ_file.py'
Jan 26 09:36:33 compute-0 sudo[67775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:34 compute-0 python3.9[67777]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:34 compute-0 sudo[67775]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:34 compute-0 sudo[67927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekkvorzuhqoedpzmpafpngpjwphnwuqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420194.322661-1401-255073025430740/AnsiballZ_file.py'
Jan 26 09:36:34 compute-0 sudo[67927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:35 compute-0 python3.9[67929]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:35 compute-0 sudo[67927]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:35 compute-0 sudo[68079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngggkpnuvlhzjwbzmwefphhafjnjvfzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420195.357455-1446-168532004301290/AnsiballZ_mount.py'
Jan 26 09:36:35 compute-0 sudo[68079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:36 compute-0 python3.9[68081]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 26 09:36:36 compute-0 sudo[68079]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:36 compute-0 sudo[68232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajpmosdzjgbylsrbvvlfkcttwhwzvpih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420196.2449267-1446-126439252939631/AnsiballZ_mount.py'
Jan 26 09:36:36 compute-0 sudo[68232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:36 compute-0 python3.9[68234]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 26 09:36:36 compute-0 sudo[68232]: pam_unix(sudo:session): session closed for user root
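The two mount tasks attach hugetlbfs pools on the directories created just before them (owned zuul:hugetlbfs). ansible.posix.mount with state=mounted both mounts the filesystem and, with boot=True, persists it in /etc/fstab; with the parameters logged above the equivalent shell steps are approximately:

    # hugetlbfs pools for 1G and 2M pages (src=none, dump=0, passno=0 above)
    echo 'none /dev/hugepages1G hugetlbfs pagesize=1G 0 0' >> /etc/fstab
    echo 'none /dev/hugepages2M hugetlbfs pagesize=2M 0 0' >> /etc/fstab
    mount /dev/hugepages1G
    mount /dev/hugepages2M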
Jan 26 09:36:37 compute-0 sshd-session[59026]: Connection closed by 192.168.122.30 port 58960
Jan 26 09:36:37 compute-0 sshd-session[59023]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:36:37 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 26 09:36:37 compute-0 systemd[1]: session-14.scope: Consumed 36.003s CPU time.
Jan 26 09:36:37 compute-0 systemd-logind[787]: Session 14 logged out. Waiting for processes to exit.
Jan 26 09:36:37 compute-0 systemd-logind[787]: Removed session 14.
Jan 26 09:36:43 compute-0 sshd-session[68260]: Accepted publickey for zuul from 192.168.122.30 port 53872 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:36:43 compute-0 systemd-logind[787]: New session 15 of user zuul.
Jan 26 09:36:43 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 26 09:36:43 compute-0 sshd-session[68260]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:36:43 compute-0 sudo[68413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnypbtisoelhjanojwywsashgnqzivvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420203.2095573-18-60332329752717/AnsiballZ_tempfile.py'
Jan 26 09:36:43 compute-0 sudo[68413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:43 compute-0 python3.9[68415]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 26 09:36:43 compute-0 sudo[68413]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:45 compute-0 sudo[68565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeijrqcoyfvufwzrhlhbxmvbkxsctkqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420205.063524-54-26533630780296/AnsiballZ_stat.py'
Jan 26 09:36:45 compute-0 sudo[68565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:45 compute-0 python3.9[68567]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:36:45 compute-0 sudo[68565]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:46 compute-0 sudo[68717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efxhuchcdpjwlrfjvotkoyobvslhkbub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420205.9135447-84-66278272970604/AnsiballZ_setup.py'
Jan 26 09:36:46 compute-0 sudo[68717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:46 compute-0 python3.9[68719]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:36:46 compute-0 sudo[68717]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:47 compute-0 sudo[68869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzepdtnlogamwqwkepjjkcjskigrxqyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420207.027303-109-16381274684314/AnsiballZ_blockinfile.py'
Jan 26 09:36:47 compute-0 sudo[68869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:47 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 26 09:36:47 compute-0 python3.9[68871]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDm+Vrn31pimz+Of4pkRaSS+qazCMrOF2INZ0EZsyoNG5922K2xwdC9F6r4k2L54HPEpDiazPoDsOHQvs1I+CvayNM2D+8hZhvqxZOMimP8b056aM14nht9ADrJUnlaDs57FkgIKQdxma9I0sW8Up3bbLchFOj2grOjH7gRdUBxblzIS01/P5NV8/kPsRXDoCgx+QAxU2nEqyCQd0JXLKoy+v6t+pG7We9wFXXr2z4XmAx7yeU0Y6NsJ1Seies0apLTmfK3HAtj/3LObvZegqVGDFtl5spotTmJdPJUCZhniaUmyYZ4jtIEno86Bf8OhS3NvLsxmNXuJcInlmCHGXDP9FPBrxG+yVB63FUAeyejCXntEyOzXFp8fiCuOVQuqDTWB4UxTRYh3EqVruxhY1taarew/VfsxIAxv6BWsqtvh/6xtRtJ9vTSDHsDTRaOcChfT5BnATFJ+Ilwpve8C4bjRVdlStH+99TgtNPOg2Fxf8scyIHInM9c4Yn7g8YTiyk=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICrJdFptF1rp2hjeKcc0nSEhHvDtAYFU4gfqZN6U+WTb
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNa2lKVjuYCljd0rl1qDkTP3ZoTV9fkbcXvtxSizwygrF6dU+RWdeB3LOkT5U/2GTJuWvOqxJBc3Y1d0b3Dj5Do=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyi0WEBS9Gc5Xay4vqFSdv0cJGdtezg+CrNF/vjEeF3l4EhpAAj7XRLEhEU1kz0DDKkzclG65hBNPO4/9cfzEa31EsSmzOqjqZp5ri20HVDkiZlUTTklhrbJGydUw6mcy+rIN1qsUugVHwkA9ufZLvzm9wvljzL+WPt1o41GT42NdNzyfPfnqf7HMDziNUNUUZjqsoy+DQnlMl3c3NHiGysPJ6IssbLBCFzPdBHpEYmR8b44qlJEhx3RYWl3QLcXAyoK7VpPdFO4ltMT+0KVVbLO9IUrocCQ4HfafPn/mV1Rq3phDWvCTRfRo07Mu4Oc4XBu+RIk9tt1WTIdT/ZusPUNSkFgprdU9zFIHLR0KyIX4qRSuWBeB20Ic5pvkRvNtwLB8lPt4NVi7bmun6moO8nu6cOjJ61CCAobDSEL/Z2cG3ADucjCSKtWLM0eSdt6T71NmULMhdB8ljIK4em/NCf/qZWjYr70WKyIZ9b8N5lDO8NF1tbPJyu+O0ebq/JN8=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAaib//yQ1QyvWijjfui4OBtTtMt7Dos+hlx8rucs2Tn
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP7YXsQWyEQWSdy5tcEAtltn11CwuaqW/S8S3OB1580hTlcLZWLPDHbzSwNDf13HBG9wgLFgmueLB8U6J7wvvcM=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0TZpcPGqQPKNdLKsJSWd1uRV3wOVDiIo3gYwVWAuH5m+Wvpw34ZI+6+d4y3DWMqDRZVWAVV0NNFB+b4MQeivx4S7KMCvBctzJ6VIyUDL5NZrwys0sYPH+33ncdZd6C8LrfCvIct+DbWCx72RQ+G0yRbYK1r/m5+dzW2411NqWn8kJkBUeLJIqT2vhFoNpO8NaWSVlWEgl5YunYEPS4v5NSM88ke6Gzc5X5sjxsz65REj6/1BXsA+quwcTAe/KC1/1Rr2cufefwf0uayM6sGuUDATjWIw36YqUeL9wc/IDdIEFEvj2hr/v+r6laaKMidOYJXBiQwIWpgWCOosSj4vrPQmDfqjOa8sAn7yWPVgxyARccavEO89zV2lpFcYTdqegPxjB90lD3Q1pMU6veJUWTRo0LAZ6n9rsRBgF0Mhr75T32Lbqf3KBro6/nPrp1XCD08mNv2cEYwp+put7vwvHzN1nPztqMsIDAMJMupwI+Buyr3xCPHe3hcAavahF+YM=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINbUUMKlV4hksqDn2YVVAHPCHip80h7zj0rReM94Ja2l
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFtD30BOt1BlR6BYm8DU7sxF5fAzZ/aciKetiRsXWlbsXS3Z4mVG1ZAF9AhArV+OaapsLeaQFybIC0e2fudJfos=
                                             create=True mode=0644 path=/tmp/ansible.82pdm1nb state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:47 compute-0 sudo[68869]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:48 compute-0 sudo[69023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsbgyjuavatbmwbrnbmhtcnpchdbaizs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420207.8312237-133-130393591658500/AnsiballZ_command.py'
Jan 26 09:36:48 compute-0 sudo[69023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:48 compute-0 python3.9[69025]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.82pdm1nb' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:36:48 compute-0 sudo[69023]: pam_unix(sudo:session): session closed for user root
Jan 26 09:36:49 compute-0 sudo[69177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inpglnvyuthftgnnumbaerfeflhdpkkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420208.6587548-157-157622345180406/AnsiballZ_file.py'
Jan 26 09:36:49 compute-0 sudo[69177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:49 compute-0 python3.9[69179]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.82pdm1nb state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:36:49 compute-0 sudo[69177]: pam_unix(sudo:session): session closed for user root
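Session 15 exists only to rebuild the system-wide known_hosts: stage the host keys of all three compute nodes in a root-owned temp file, overwrite /etc/ssh/ssh_known_hosts from it, and clean up. A condensed sketch of the pattern (the temp filename is generated per run):

    tmp=$(mktemp /tmp/ansible.XXXXXXXX)
    # blockinfile writes the ssh-rsa/ssh-ed25519/ecdsa-sha2-nistp256 keys of
    # compute-0/1/2 between ANSIBLE MANAGED BLOCK markers into "$tmp"
    cat "$tmp" > /etc/ssh/ssh_known_hosts
    rm -f "$tmp"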
Jan 26 09:36:49 compute-0 sshd-session[68263]: Connection closed by 192.168.122.30 port 53872
Jan 26 09:36:49 compute-0 sshd-session[68260]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:36:49 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 26 09:36:49 compute-0 systemd[1]: session-15.scope: Consumed 3.238s CPU time.
Jan 26 09:36:49 compute-0 systemd-logind[787]: Session 15 logged out. Waiting for processes to exit.
Jan 26 09:36:49 compute-0 systemd-logind[787]: Removed session 15.
Jan 26 09:36:55 compute-0 sshd-session[69204]: Invalid user admin from 157.245.76.178 port 57564
Jan 26 09:36:55 compute-0 sshd-session[69204]: Connection closed by invalid user admin 157.245.76.178 port 57564 [preauth]
Jan 26 09:36:57 compute-0 sshd-session[69206]: Accepted publickey for zuul from 192.168.122.30 port 39172 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:36:57 compute-0 systemd-logind[787]: New session 16 of user zuul.
Jan 26 09:36:57 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 26 09:36:57 compute-0 sshd-session[69206]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:36:58 compute-0 python3.9[69359]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:36:59 compute-0 sudo[69513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnkrafhpejmkmyfarsdjefncxqtinssp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420218.7936976-51-150151434422726/AnsiballZ_systemd.py'
Jan 26 09:36:59 compute-0 sudo[69513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:36:59 compute-0 python3.9[69515]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 26 09:36:59 compute-0 sudo[69513]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:00 compute-0 sudo[69667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mltiybmkkckoctszktgpndnomsjlnepc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420219.9702613-75-273656493475319/AnsiballZ_systemd.py'
Jan 26 09:37:00 compute-0 sudo[69667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:00 compute-0 python3.9[69669]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:37:00 compute-0 sudo[69667]: pam_unix(sudo:session): session closed for user root
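The sshd service is handled in two tasks, one for boot-time enablement and one for the running state, so each change is reported separately; together they amount to:

    systemctl enable sshd
    systemctl start sshd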
Jan 26 09:37:01 compute-0 sudo[69820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kujxjqzmzivnkqnszvuaddhpbvharcfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420220.8553212-102-173841053672271/AnsiballZ_command.py'
Jan 26 09:37:01 compute-0 sudo[69820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:01 compute-0 python3.9[69822]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:37:01 compute-0 sudo[69820]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:01 compute-0 anacron[2726]: Job `cron.weekly' started
Jan 26 09:37:01 compute-0 anacron[2726]: Job `cron.weekly' terminated
Jan 26 09:37:02 compute-0 sudo[69975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbpkctjzjovzckbdvwpvklicneaiglkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420221.633332-126-278873184296940/AnsiballZ_stat.py'
Jan 26 09:37:02 compute-0 sudo[69975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:02 compute-0 python3.9[69977]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:37:02 compute-0 sudo[69975]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:02 compute-0 sudo[70129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzltbxbuqupeuwmjcrtcdkayjrxrilby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420222.4611967-150-226590780492554/AnsiballZ_command.py'
Jan 26 09:37:02 compute-0 sudo[70129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:02 compute-0 python3.9[70131]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:37:02 compute-0 sudo[70129]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:03 compute-0 sudo[70284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nixujojzsqhlptdcxtanwlnblnptrgfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420223.1681666-174-272326215564439/AnsiballZ_file.py'
Jan 26 09:37:03 compute-0 sudo[70284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:03 compute-0 python3.9[70286]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:37:03 compute-0 sudo[70284]: pam_unix(sudo:session): session closed for user root
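Rule reloads are gated on the marker file touched at 09:36:31: the chains file is always re-applied, but the flush/rules/update-jumps pipeline only runs because the stat above found /etc/nftables/edpm-rules.nft.changed, and the marker is removed afterwards. In shell terms, approximately:

    nft -f /etc/nftables/edpm-chains.nft
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi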
Jan 26 09:37:04 compute-0 sshd-session[69209]: Connection closed by 192.168.122.30 port 39172
Jan 26 09:37:04 compute-0 sshd-session[69206]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:37:04 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 26 09:37:04 compute-0 systemd[1]: session-16.scope: Consumed 4.696s CPU time.
Jan 26 09:37:04 compute-0 systemd-logind[787]: Session 16 logged out. Waiting for processes to exit.
Jan 26 09:37:04 compute-0 systemd-logind[787]: Removed session 16.
Jan 26 09:37:09 compute-0 sshd-session[70311]: Accepted publickey for zuul from 192.168.122.30 port 44010 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:37:09 compute-0 systemd-logind[787]: New session 17 of user zuul.
Jan 26 09:37:09 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 26 09:37:09 compute-0 sshd-session[70311]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:37:10 compute-0 python3.9[70464]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:37:11 compute-0 sudo[70618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpngzgjozmljxqarcdnrhsrkyvjhdxlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420231.0731387-57-156242722149266/AnsiballZ_setup.py'
Jan 26 09:37:11 compute-0 sudo[70618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:11 compute-0 python3.9[70620]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:37:11 compute-0 sudo[70618]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:12 compute-0 sudo[70702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndzxpndpapgfqedaovmzopjbysszqhoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420231.0731387-57-156242722149266/AnsiballZ_dnf.py'
Jan 26 09:37:12 compute-0 sudo[70702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:12 compute-0 python3.9[70704]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 09:37:13 compute-0 sudo[70702]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:14 compute-0 python3.9[70855]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:37:15 compute-0 python3.9[71006]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
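Session 17 is a reboot-needed probe combining two signals: needs-restarting -r from the yum-utils just installed (it exits 1 when a core package update requires a reboot, 0 otherwise) and flag files under /var/lib/openstack/reboot_required/. Roughly:

    if ! needs-restarting -r; then
        echo "reboot required by package updates"
    fi
    # flag files dropped by other roles presumably also trigger a reboot
    find /var/lib/openstack/reboot_required/ -maxdepth 1 -type f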
Jan 26 09:37:16 compute-0 python3.9[71156]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:37:16 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 09:37:17 compute-0 python3.9[71307]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:37:17 compute-0 sshd-session[70314]: Connection closed by 192.168.122.30 port 44010
Jan 26 09:37:17 compute-0 sshd-session[70311]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:37:17 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 26 09:37:17 compute-0 systemd[1]: session-17.scope: Consumed 5.814s CPU time.
Jan 26 09:37:17 compute-0 systemd-logind[787]: Session 17 logged out. Waiting for processes to exit.
Jan 26 09:37:17 compute-0 systemd-logind[787]: Removed session 17.
Jan 26 09:37:25 compute-0 sshd-session[71332]: Accepted publickey for zuul from 38.102.83.222 port 53582 ssh2: RSA SHA256:pzGu/8MlhtIDRxsRqlS4AZ6R7CLTQo7Ke10EmY50Qfo
Jan 26 09:37:25 compute-0 systemd-logind[787]: New session 18 of user zuul.
Jan 26 09:37:25 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 26 09:37:25 compute-0 sshd-session[71332]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:37:26 compute-0 sudo[71408]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuqsqsrrpqwbcytezwyokqfbzcswgwsl ; /usr/bin/python3'
Jan 26 09:37:26 compute-0 sudo[71408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:26 compute-0 useradd[71412]: new group: name=ceph-admin, GID=42478
Jan 26 09:37:26 compute-0 useradd[71412]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 26 09:37:26 compute-0 sudo[71408]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:26 compute-0 sudo[71494]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhtkhkujkvamwcmlzwkwlmkxiroykmwp ; /usr/bin/python3'
Jan 26 09:37:26 compute-0 sudo[71494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:26 compute-0 sudo[71494]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:27 compute-0 sudo[71567]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqybzaegjoqjhkkhxokvelbhbddtpdlh ; /usr/bin/python3'
Jan 26 09:37:27 compute-0 sudo[71567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:27 compute-0 sudo[71567]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:27 compute-0 sudo[71617]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzabzowcruqxyqfeetgmononlcjzapqn ; /usr/bin/python3'
Jan 26 09:37:27 compute-0 sudo[71617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:27 compute-0 sudo[71617]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:28 compute-0 sudo[71643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivacdsuuoutdopwladsdekuxmghrvyrh ; /usr/bin/python3'
Jan 26 09:37:28 compute-0 sudo[71643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:28 compute-0 sudo[71643]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:28 compute-0 sudo[71669]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaqhjukzyghpzcvnikjxhipsdazttunr ; /usr/bin/python3'
Jan 26 09:37:28 compute-0 sudo[71669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:28 compute-0 sudo[71669]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:29 compute-0 sudo[71695]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eygmxdrpjchixuilwzlajsaprlxzqikh ; /usr/bin/python3'
Jan 26 09:37:29 compute-0 sudo[71695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:29 compute-0 sudo[71695]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:29 compute-0 sudo[71773]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whfpokvopvckqmzbqbldhjuvgsfvyhgy ; /usr/bin/python3'
Jan 26 09:37:29 compute-0 sudo[71773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:29 compute-0 sudo[71773]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:29 compute-0 sudo[71846]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvpafrpravnhcpplvuhzcbuqklduxeua ; /usr/bin/python3'
Jan 26 09:37:29 compute-0 sudo[71846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:29 compute-0 sudo[71846]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:30 compute-0 sudo[71948]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmykrqebuhfajuibgfxtdksgodcyqucn ; /usr/bin/python3'
Jan 26 09:37:30 compute-0 sudo[71948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:30 compute-0 sudo[71948]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:30 compute-0 sudo[72021]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhfjomzyafufhnnkvrhvvfwfostwibsn ; /usr/bin/python3'
Jan 26 09:37:30 compute-0 sudo[72021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:30 compute-0 sudo[72021]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:31 compute-0 sudo[72071]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-silydozacvsckemhogsetembhnedirgm ; /usr/bin/python3'
Jan 26 09:37:31 compute-0 sudo[72071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:31 compute-0 python3[72073]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:37:32 compute-0 sudo[72071]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:33 compute-0 sudo[72167]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwuoybujjmpncxlrrbybvylabwygevyx ; /usr/bin/python3'
Jan 26 09:37:33 compute-0 sudo[72167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:33 compute-0 python3[72169]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 26 09:37:34 compute-0 sudo[72167]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:34 compute-0 sudo[72194]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibjynswatscspfqtzvxfuphxbpakpooc ; /usr/bin/python3'
Jan 26 09:37:34 compute-0 sudo[72194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:35 compute-0 python3[72196]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 09:37:35 compute-0 sudo[72194]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:35 compute-0 sudo[72220]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ispbctlxpvyfnzhddxldixzpghrimddl ; /usr/bin/python3'
Jan 26 09:37:35 compute-0 sudo[72220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:35 compute-0 python3[72222]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:37:35 compute-0 kernel: loop: module loaded
Jan 26 09:37:35 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Jan 26 09:37:35 compute-0 sudo[72220]: pam_unix(sudo:session): session closed for user root
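The OSD backing store is a 20 GiB sparse file: dd with bs=1 count=0 seek=20G writes no data and only sets the file size, and the kernel lines confirm the attach (41943040 sectors x 512 bytes = 20 GiB). As run above:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk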
Jan 26 09:37:35 compute-0 sudo[72256]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbdegdsnirpebicvniklamsnlzvorxrr ; /usr/bin/python3'
Jan 26 09:37:35 compute-0 sudo[72256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:35 compute-0 python3[72258]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:37:35 compute-0 lvm[72261]: PV /dev/loop3 not used.
Jan 26 09:37:35 compute-0 lvm[72270]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:37:36 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 26 09:37:36 compute-0 sudo[72256]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:36 compute-0 lvm[72272]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 26 09:37:36 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
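On top of the loop device the play builds a one-PV volume group and hands all of its extents to a single logical volume for Ceph to consume; the lvm/systemd lines in between are the normal autoactivation of the new VG. As run above:

    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs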
Jan 26 09:37:36 compute-0 sudo[72348]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnemtyphnuzsgwaymacumugkxipwzbol ; /usr/bin/python3'
Jan 26 09:37:36 compute-0 sudo[72348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:36 compute-0 python3[72350]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:37:36 compute-0 sudo[72348]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:36 compute-0 sudo[72421]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqgakfswmzvsfabithdrtwlrekbuvfyl ; /usr/bin/python3'
Jan 26 09:37:36 compute-0 sudo[72421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:36 compute-0 python3[72423]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769420256.3285275-36915-140470430848606/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:37:36 compute-0 sudo[72421]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:37 compute-0 sudo[72471]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieqbzzirgjgyrstbtnbywylqlqxwrswp ; /usr/bin/python3'
Jan 26 09:37:37 compute-0 sudo[72471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:37 compute-0 python3[72473]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:37:37 compute-0 systemd[1]: Reloading.
Jan 26 09:37:37 compute-0 systemd-rc-local-generator[72502]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:37:37 compute-0 systemd-sysv-generator[72506]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:37:38 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 26 09:37:38 compute-0 bash[72512]: /dev/loop3: [64513]:4328449 (/var/lib/ceph-osd-0.img)
Jan 26 09:37:38 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 26 09:37:38 compute-0 sudo[72471]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:38 compute-0 lvm[72513]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:37:38 compute-0 lvm[72513]: VG ceph_vg0 finished
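Loop devices do not survive a reboot, hence ceph-osd-losetup-0.service to re-attach the backing file at boot. The unit body is not captured in the log; based on its name, the oneshot Started/Finished pattern, and the losetup listing printed by bash[72512], a plausible reconstruction (hypothetical, not verbatim) is:

    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # re-attach if missing, then print the mapping (matches the logged output)
    ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'
    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now ceph-osd-losetup-0.service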
Jan 26 09:37:39 compute-0 sshd-session[72514]: Invalid user admin from 157.245.76.178 port 56824
Jan 26 09:37:39 compute-0 sshd-session[72514]: Connection closed by invalid user admin 157.245.76.178 port 56824 [preauth]
Jan 26 09:37:40 compute-0 chronyd[58542]: Selected source 167.160.187.12 (pool.ntp.org)
Jan 26 09:37:40 compute-0 python3[72539]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:37:42 compute-0 sudo[72630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eylzwwsskvpdqtpbfvvzmakbjszbappa ; /usr/bin/python3'
Jan 26 09:37:42 compute-0 sudo[72630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:43 compute-0 python3[72632]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 26 09:37:45 compute-0 sudo[72630]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:45 compute-0 sudo[72687]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcooretpkoihdtvwgegcosrxquhlwwvs ; /usr/bin/python3'
Jan 26 09:37:45 compute-0 sudo[72687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:45 compute-0 python3[72689]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 26 09:37:47 compute-0 groupadd[72699]: group added to /etc/group: name=cephadm, GID=993
Jan 26 09:37:47 compute-0 groupadd[72699]: group added to /etc/gshadow: name=cephadm
Jan 26 09:37:47 compute-0 groupadd[72699]: new group: name=cephadm, GID=993
Jan 26 09:37:47 compute-0 useradd[72706]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 26 09:37:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 09:37:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 09:37:47 compute-0 sudo[72687]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 09:37:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 09:37:48 compute-0 systemd[1]: run-r5bad3524b6f24f5ba9d85fe6bf36266a.service: Deactivated successfully.
Jan 26 09:37:48 compute-0 sudo[72802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwecabjbshvktvnaeypjalxyzpdnqkgl ; /usr/bin/python3'
Jan 26 09:37:48 compute-0 sudo[72802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:48 compute-0 python3[72804]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 09:37:48 compute-0 sudo[72802]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:48 compute-0 sudo[72830]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdwcfjjdrpeuvojyjxzmzwjvhyqutntp ; /usr/bin/python3'
Jan 26 09:37:48 compute-0 sudo[72830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:48 compute-0 python3[72832]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:37:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:37:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:37:49 compute-0 sudo[72830]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:49 compute-0 sudo[72891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-breyeufaycmxqaaaqiyuvnezvlefiesv ; /usr/bin/python3'
Jan 26 09:37:49 compute-0 sudo[72891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:49 compute-0 python3[72893]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:37:49 compute-0 sudo[72891]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:49 compute-0 sudo[72917]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omwumxckaheltauafcokrcfnegbsapjp ; /usr/bin/python3'
Jan 26 09:37:49 compute-0 sudo[72917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:49 compute-0 python3[72919]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:37:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:37:49 compute-0 sudo[72917]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:50 compute-0 sudo[72995]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbtqwpdiizowykoukmyqbdbjkyxjtoag ; /usr/bin/python3'
Jan 26 09:37:50 compute-0 sudo[72995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:50 compute-0 python3[72997]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:37:50 compute-0 sudo[72995]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:50 compute-0 sudo[73068]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkgfqrlzhlidswptihlromaquqcbeiqa ; /usr/bin/python3'
Jan 26 09:37:50 compute-0 sudo[73068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:50 compute-0 python3[73070]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769420270.2493534-37109-276916969718551/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:37:50 compute-0 sudo[73068]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:51 compute-0 sudo[73170]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvgcqusyqiibhwhfnskjkmikucoffeub ; /usr/bin/python3'
Jan 26 09:37:51 compute-0 sudo[73170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:51 compute-0 python3[73172]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:37:51 compute-0 sudo[73170]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:52 compute-0 sudo[73243]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqpzqtqsqyzrsgovdivrgjuzsjzenauk ; /usr/bin/python3'
Jan 26 09:37:52 compute-0 sudo[73243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:52 compute-0 python3[73245]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769420271.6418827-37127-46933996201755/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:37:52 compute-0 sudo[73243]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:52 compute-0 sudo[73293]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyqmrwyzgmdukyjkcrvgscrxhsmclbdo ; /usr/bin/python3'
Jan 26 09:37:52 compute-0 sudo[73293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:52 compute-0 python3[73295]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 09:37:52 compute-0 sudo[73293]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:52 compute-0 sudo[73321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zudmfinwpdukasmjhhyqqzlpqomsbdon ; /usr/bin/python3'
Jan 26 09:37:52 compute-0 sudo[73321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:53 compute-0 python3[73323]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 09:37:53 compute-0 sudo[73321]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:53 compute-0 sudo[73349]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwgbtmcmhfvrgwlfdqealqshmqimzigm ; /usr/bin/python3'
Jan 26 09:37:53 compute-0 sudo[73349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:53 compute-0 python3[73351]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 09:37:53 compute-0 sudo[73349]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:53 compute-0 python3[73377]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 09:37:53 compute-0 sudo[73401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vueobknwfbjfwlnctneeuytzjlrvazxx ; /usr/bin/python3'
Jan 26 09:37:53 compute-0 sudo[73401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:37:54 compute-0 python3[73403]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
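
Re-wrapped for readability, the bootstrap command recorded in that task is the following; no flags are added or removed, and the stray backslash logged before --skip-monitoring-stack (a harmless leftover of shell line-folding) is dropped:

    /usr/sbin/cephadm bootstrap \
      --skip-firewalld \
      --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
      --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
      --ssh-user ceph-admin \
      --allow-fqdn-hostname \
      --output-keyring /etc/ceph/ceph.client.admin.keyring \
      --output-config /etc/ceph/ceph.conf \
      --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 \
      --config /home/ceph-admin/assimilate_ceph.conf \
      --skip-monitoring-stack \
      --skip-dashboard \
      --mon-ip 192.168.122.100
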
Jan 26 09:37:54 compute-0 sshd-session[73407]: Accepted publickey for ceph-admin from 192.168.122.100 port 36688 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:37:54 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 26 09:37:54 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 26 09:37:54 compute-0 systemd-logind[787]: New session 19 of user ceph-admin.
Jan 26 09:37:54 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 26 09:37:54 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 26 09:37:54 compute-0 systemd[73411]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:37:54 compute-0 systemd[73411]: Queued start job for default target Main User Target.
Jan 26 09:37:54 compute-0 systemd[73411]: Created slice User Application Slice.
Jan 26 09:37:54 compute-0 systemd[73411]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 26 09:37:54 compute-0 systemd[73411]: Started Daily Cleanup of User's Temporary Directories.
Jan 26 09:37:54 compute-0 systemd[73411]: Reached target Paths.
Jan 26 09:37:54 compute-0 systemd[73411]: Reached target Timers.
Jan 26 09:37:54 compute-0 systemd[73411]: Starting D-Bus User Message Bus Socket...
Jan 26 09:37:54 compute-0 systemd[73411]: Starting Create User's Volatile Files and Directories...
Jan 26 09:37:54 compute-0 systemd[73411]: Listening on D-Bus User Message Bus Socket.
Jan 26 09:37:54 compute-0 systemd[73411]: Reached target Sockets.
Jan 26 09:37:54 compute-0 systemd[73411]: Finished Create User's Volatile Files and Directories.
Jan 26 09:37:54 compute-0 systemd[73411]: Reached target Basic System.
Jan 26 09:37:54 compute-0 systemd[73411]: Reached target Main User Target.
Jan 26 09:37:54 compute-0 systemd[73411]: Startup finished in 112ms.
Jan 26 09:37:54 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 26 09:37:54 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Jan 26 09:37:54 compute-0 sshd-session[73407]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:37:54 compute-0 sudo[73427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 26 09:37:54 compute-0 sudo[73427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:37:54 compute-0 sudo[73427]: pam_unix(sudo:session): session closed for user root
Jan 26 09:37:54 compute-0 sshd-session[73426]: Received disconnect from 192.168.122.100 port 36688:11: disconnected by user
Jan 26 09:37:54 compute-0 sshd-session[73426]: Disconnected from user ceph-admin 192.168.122.100 port 36688
Jan 26 09:37:54 compute-0 sshd-session[73407]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:37:54 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 26 09:37:54 compute-0 systemd-logind[787]: Session 19 logged out. Waiting for processes to exit.
Jan 26 09:37:54 compute-0 systemd-logind[787]: Removed session 19.
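
Session 19 above is bootstrap's SSH connectivity check: cephadm logs in as ceph-admin with the key pair it was given, runs a harmless sudo /bin/echo to prove passwordless sudo works, and disconnects. Reproducing the check by hand would look roughly like:

    # Assumes the same key pair passed to bootstrap
    ssh -i /home/ceph-admin/.ssh/id_rsa ceph-admin@192.168.122.100 sudo /bin/echo
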
Jan 26 09:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:37:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4221338456-lower\x2dmapped.mount: Deactivated successfully.
Jan 26 09:38:04 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 26 09:38:04 compute-0 systemd[73411]: Activating special unit Exit the Session...
Jan 26 09:38:04 compute-0 systemd[73411]: Stopped target Main User Target.
Jan 26 09:38:04 compute-0 systemd[73411]: Stopped target Basic System.
Jan 26 09:38:04 compute-0 systemd[73411]: Stopped target Paths.
Jan 26 09:38:04 compute-0 systemd[73411]: Stopped target Sockets.
Jan 26 09:38:04 compute-0 systemd[73411]: Stopped target Timers.
Jan 26 09:38:04 compute-0 systemd[73411]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 26 09:38:04 compute-0 systemd[73411]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 26 09:38:04 compute-0 systemd[73411]: Closed D-Bus User Message Bus Socket.
Jan 26 09:38:04 compute-0 systemd[73411]: Stopped Create User's Volatile Files and Directories.
Jan 26 09:38:04 compute-0 systemd[73411]: Removed slice User Application Slice.
Jan 26 09:38:04 compute-0 systemd[73411]: Reached target Shutdown.
Jan 26 09:38:04 compute-0 systemd[73411]: Finished Exit the Session.
Jan 26 09:38:04 compute-0 systemd[73411]: Reached target Exit the Session.
Jan 26 09:38:04 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 26 09:38:04 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 26 09:38:04 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 26 09:38:04 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 26 09:38:04 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 26 09:38:04 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 26 09:38:04 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 26 09:38:13 compute-0 podman[73505]: 2026-01-26 09:38:13.164761066 +0000 UTC m=+18.225061147 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:38:13 compute-0 podman[73574]: 2026-01-26 09:38:13.265415771 +0000 UTC m=+0.079412427 container create 8f2b6e41e2801cdb8189e60221c6578a019c9541975d9bbea508afdfa280539e (image=quay.io/ceph/ceph:v19, name=angry_blackburn, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 09:38:13 compute-0 podman[73574]: 2026-01-26 09:38:13.206510145 +0000 UTC m=+0.020506821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:13 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 26 09:38:13 compute-0 systemd[1]: Started libpod-conmon-8f2b6e41e2801cdb8189e60221c6578a019c9541975d9bbea508afdfa280539e.scope.
Jan 26 09:38:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:13 compute-0 podman[73574]: 2026-01-26 09:38:13.357913936 +0000 UTC m=+0.171910602 container init 8f2b6e41e2801cdb8189e60221c6578a019c9541975d9bbea508afdfa280539e (image=quay.io/ceph/ceph:v19, name=angry_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 09:38:13 compute-0 podman[73574]: 2026-01-26 09:38:13.364714601 +0000 UTC m=+0.178711247 container start 8f2b6e41e2801cdb8189e60221c6578a019c9541975d9bbea508afdfa280539e (image=quay.io/ceph/ceph:v19, name=angry_blackburn, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:38:13 compute-0 podman[73574]: 2026-01-26 09:38:13.368768372 +0000 UTC m=+0.182765048 container attach 8f2b6e41e2801cdb8189e60221c6578a019c9541975d9bbea508afdfa280539e (image=quay.io/ceph/ceph:v19, name=angry_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 09:38:13 compute-0 angry_blackburn[73590]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Jan 26 09:38:13 compute-0 systemd[1]: libpod-8f2b6e41e2801cdb8189e60221c6578a019c9541975d9bbea508afdfa280539e.scope: Deactivated successfully.
Jan 26 09:38:13 compute-0 podman[73574]: 2026-01-26 09:38:13.455727784 +0000 UTC m=+0.269724480 container died 8f2b6e41e2801cdb8189e60221c6578a019c9541975d9bbea508afdfa280539e (image=quay.io/ceph/ceph:v19, name=angry_blackburn, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 09:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-263e0c41f86ac7795092e81a703d4c41484d33ed6ccd6f3c7a80653674989daa-merged.mount: Deactivated successfully.
Jan 26 09:38:13 compute-0 podman[73574]: 2026-01-26 09:38:13.532469418 +0000 UTC m=+0.346466064 container remove 8f2b6e41e2801cdb8189e60221c6578a019c9541975d9bbea508afdfa280539e (image=quay.io/ceph/ceph:v19, name=angry_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 09:38:13 compute-0 systemd[1]: libpod-conmon-8f2b6e41e2801cdb8189e60221c6578a019c9541975d9bbea508afdfa280539e.scope: Deactivated successfully.
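
The short-lived angry_blackburn container exists only to report the image's Ceph version (19.2.3 squid), which cephadm checks before committing to the image. Roughly equivalent by hand:

    # Print the Ceph version baked into the bootstrap image
    podman run --rm --entrypoint /usr/bin/ceph quay.io/ceph/ceph:v19 --version
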
Jan 26 09:38:13 compute-0 podman[73610]: 2026-01-26 09:38:13.590590244 +0000 UTC m=+0.038099491 container create 1da2a85452fa7544d20241d9b01b10825831f501a71ce7e5abc076220834ef04 (image=quay.io/ceph/ceph:v19, name=heuristic_tesla, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Jan 26 09:38:13 compute-0 systemd[1]: Started libpod-conmon-1da2a85452fa7544d20241d9b01b10825831f501a71ce7e5abc076220834ef04.scope.
Jan 26 09:38:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:13 compute-0 podman[73610]: 2026-01-26 09:38:13.573280252 +0000 UTC m=+0.020789519 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:13 compute-0 podman[73610]: 2026-01-26 09:38:13.807323717 +0000 UTC m=+0.254832974 container init 1da2a85452fa7544d20241d9b01b10825831f501a71ce7e5abc076220834ef04 (image=quay.io/ceph/ceph:v19, name=heuristic_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 09:38:13 compute-0 podman[73610]: 2026-01-26 09:38:13.811972834 +0000 UTC m=+0.259482091 container start 1da2a85452fa7544d20241d9b01b10825831f501a71ce7e5abc076220834ef04 (image=quay.io/ceph/ceph:v19, name=heuristic_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 26 09:38:13 compute-0 podman[73610]: 2026-01-26 09:38:13.815126179 +0000 UTC m=+0.262635486 container attach 1da2a85452fa7544d20241d9b01b10825831f501a71ce7e5abc076220834ef04 (image=quay.io/ceph/ceph:v19, name=heuristic_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 09:38:13 compute-0 heuristic_tesla[73627]: 167 167
Jan 26 09:38:13 compute-0 systemd[1]: libpod-1da2a85452fa7544d20241d9b01b10825831f501a71ce7e5abc076220834ef04.scope: Deactivated successfully.
Jan 26 09:38:13 compute-0 podman[73610]: 2026-01-26 09:38:13.81770708 +0000 UTC m=+0.265216317 container died 1da2a85452fa7544d20241d9b01b10825831f501a71ce7e5abc076220834ef04 (image=quay.io/ceph/ceph:v19, name=heuristic_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 26 09:38:13 compute-0 podman[73610]: 2026-01-26 09:38:13.857778784 +0000 UTC m=+0.305288021 container remove 1da2a85452fa7544d20241d9b01b10825831f501a71ce7e5abc076220834ef04 (image=quay.io/ceph/ceph:v19, name=heuristic_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:13 compute-0 systemd[1]: libpod-conmon-1da2a85452fa7544d20241d9b01b10825831f501a71ce7e5abc076220834ef04.scope: Deactivated successfully.
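
The "167 167" printed by heuristic_tesla is the ceph user's UID and GID inside the image; cephadm probes these so that host-side data directories get matching ownership. Likely equivalent:

    # Read the ceph uid/gid from the image (167 167 on this build)
    podman run --rm --entrypoint stat quay.io/ceph/ceph:v19 -c '%u %g' /var/lib/ceph
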
Jan 26 09:38:13 compute-0 podman[73644]: 2026-01-26 09:38:13.940440459 +0000 UTC m=+0.058604270 container create a7c8f370d75237460f39b6e9325e47bd232116878fcda483399d12f072a92385 (image=quay.io/ceph/ceph:v19, name=jovial_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 26 09:38:14 compute-0 podman[73644]: 2026-01-26 09:38:13.904731194 +0000 UTC m=+0.022895015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:14 compute-0 systemd[1]: Started libpod-conmon-a7c8f370d75237460f39b6e9325e47bd232116878fcda483399d12f072a92385.scope.
Jan 26 09:38:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:14 compute-0 podman[73644]: 2026-01-26 09:38:14.076106931 +0000 UTC m=+0.194270732 container init a7c8f370d75237460f39b6e9325e47bd232116878fcda483399d12f072a92385 (image=quay.io/ceph/ceph:v19, name=jovial_yonath, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:14 compute-0 podman[73644]: 2026-01-26 09:38:14.080808049 +0000 UTC m=+0.198971850 container start a7c8f370d75237460f39b6e9325e47bd232116878fcda483399d12f072a92385 (image=quay.io/ceph/ceph:v19, name=jovial_yonath, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 26 09:38:14 compute-0 podman[73644]: 2026-01-26 09:38:14.084847649 +0000 UTC m=+0.203011500 container attach a7c8f370d75237460f39b6e9325e47bd232116878fcda483399d12f072a92385 (image=quay.io/ceph/ceph:v19, name=jovial_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:14 compute-0 jovial_yonath[73660]: AQAGNndpHJ7hBRAAwHgaS/O7BD3+PX5Up/bX7w==
Jan 26 09:38:14 compute-0 systemd[1]: libpod-a7c8f370d75237460f39b6e9325e47bd232116878fcda483399d12f072a92385.scope: Deactivated successfully.
Jan 26 09:38:14 compute-0 podman[73644]: 2026-01-26 09:38:14.101965016 +0000 UTC m=+0.220128807 container died a7c8f370d75237460f39b6e9325e47bd232116878fcda483399d12f072a92385 (image=quay.io/ceph/ceph:v19, name=jovial_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1eda58c87c78e92643eb32907fb2c6da21794fc5d164d4cbb1a9e1e2d7b77911-merged.mount: Deactivated successfully.
Jan 26 09:38:14 compute-0 podman[73644]: 2026-01-26 09:38:14.187370086 +0000 UTC m=+0.305533877 container remove a7c8f370d75237460f39b6e9325e47bd232116878fcda483399d12f072a92385 (image=quay.io/ceph/ceph:v19, name=jovial_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:38:14 compute-0 systemd[1]: libpod-conmon-a7c8f370d75237460f39b6e9325e47bd232116878fcda483399d12f072a92385.scope: Deactivated successfully.
Jan 26 09:38:14 compute-0 podman[73681]: 2026-01-26 09:38:14.241650907 +0000 UTC m=+0.036345502 container create 54a9ef114c5693b6bae0b5f9c1d01c48dccdfb3170448a72f7998b77662d2245 (image=quay.io/ceph/ceph:v19, name=stoic_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:38:14 compute-0 systemd[1]: Started libpod-conmon-54a9ef114c5693b6bae0b5f9c1d01c48dccdfb3170448a72f7998b77662d2245.scope.
Jan 26 09:38:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:14 compute-0 podman[73681]: 2026-01-26 09:38:14.22527452 +0000 UTC m=+0.019969115 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:14 compute-0 podman[73681]: 2026-01-26 09:38:14.415960513 +0000 UTC m=+0.210655128 container init 54a9ef114c5693b6bae0b5f9c1d01c48dccdfb3170448a72f7998b77662d2245 (image=quay.io/ceph/ceph:v19, name=stoic_chebyshev, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 09:38:14 compute-0 podman[73681]: 2026-01-26 09:38:14.420909138 +0000 UTC m=+0.215603733 container start 54a9ef114c5693b6bae0b5f9c1d01c48dccdfb3170448a72f7998b77662d2245 (image=quay.io/ceph/ceph:v19, name=stoic_chebyshev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:38:14 compute-0 stoic_chebyshev[73698]: AQAGNndpO3VfGhAAy4VziC6F2Poo3njrqI3euw==
Jan 26 09:38:14 compute-0 systemd[1]: libpod-54a9ef114c5693b6bae0b5f9c1d01c48dccdfb3170448a72f7998b77662d2245.scope: Deactivated successfully.
Jan 26 09:38:14 compute-0 podman[73681]: 2026-01-26 09:38:14.454607017 +0000 UTC m=+0.249301622 container attach 54a9ef114c5693b6bae0b5f9c1d01c48dccdfb3170448a72f7998b77662d2245 (image=quay.io/ceph/ceph:v19, name=stoic_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 26 09:38:14 compute-0 podman[73681]: 2026-01-26 09:38:14.454926056 +0000 UTC m=+0.249620651 container died 54a9ef114c5693b6bae0b5f9c1d01c48dccdfb3170448a72f7998b77662d2245 (image=quay.io/ceph/ceph:v19, name=stoic_chebyshev, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:14 compute-0 podman[73681]: 2026-01-26 09:38:14.927643842 +0000 UTC m=+0.722338477 container remove 54a9ef114c5693b6bae0b5f9c1d01c48dccdfb3170448a72f7998b77662d2245 (image=quay.io/ceph/ceph:v19, name=stoic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:38:15 compute-0 systemd[1]: libpod-conmon-54a9ef114c5693b6bae0b5f9c1d01c48dccdfb3170448a72f7998b77662d2245.scope: Deactivated successfully.
Jan 26 09:38:15 compute-0 podman[73717]: 2026-01-26 09:38:15.001387206 +0000 UTC m=+0.040548828 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:15 compute-0 podman[73717]: 2026-01-26 09:38:15.09540607 +0000 UTC m=+0.134567652 container create c1237bb8a5e39af56ef217bdd525da98d7400b6567893a7235bde5fdad7cceea (image=quay.io/ceph/ceph:v19, name=tender_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 26 09:38:18 compute-0 systemd[1]: Started libpod-conmon-c1237bb8a5e39af56ef217bdd525da98d7400b6567893a7235bde5fdad7cceea.scope.
Jan 26 09:38:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:18 compute-0 podman[73717]: 2026-01-26 09:38:18.346703738 +0000 UTC m=+3.385865310 container init c1237bb8a5e39af56ef217bdd525da98d7400b6567893a7235bde5fdad7cceea (image=quay.io/ceph/ceph:v19, name=tender_sanderson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 26 09:38:18 compute-0 podman[73717]: 2026-01-26 09:38:18.353163094 +0000 UTC m=+3.392324676 container start c1237bb8a5e39af56ef217bdd525da98d7400b6567893a7235bde5fdad7cceea (image=quay.io/ceph/ceph:v19, name=tender_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:18 compute-0 podman[73717]: 2026-01-26 09:38:18.357940194 +0000 UTC m=+3.397101776 container attach c1237bb8a5e39af56ef217bdd525da98d7400b6567893a7235bde5fdad7cceea (image=quay.io/ceph/ceph:v19, name=tender_sanderson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 26 09:38:18 compute-0 tender_sanderson[73733]: AQAKNndpsCKVFhAAdObGDbJUG7/U/xB+wG3CmQ==
Jan 26 09:38:18 compute-0 systemd[1]: libpod-c1237bb8a5e39af56ef217bdd525da98d7400b6567893a7235bde5fdad7cceea.scope: Deactivated successfully.
Jan 26 09:38:18 compute-0 podman[73717]: 2026-01-26 09:38:18.3834522 +0000 UTC m=+3.422613792 container died c1237bb8a5e39af56ef217bdd525da98d7400b6567893a7235bde5fdad7cceea (image=quay.io/ceph/ceph:v19, name=tender_sanderson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 09:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5e0571feaffce670988c7b1744f48a2e684c3f03046f262f2b26e83a01ebbbc-merged.mount: Deactivated successfully.
Jan 26 09:38:18 compute-0 podman[73717]: 2026-01-26 09:38:18.829096869 +0000 UTC m=+3.868258451 container remove c1237bb8a5e39af56ef217bdd525da98d7400b6567893a7235bde5fdad7cceea (image=quay.io/ceph/ceph:v19, name=tender_sanderson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:38:18 compute-0 systemd[1]: libpod-conmon-c1237bb8a5e39af56ef217bdd525da98d7400b6567893a7235bde5fdad7cceea.scope: Deactivated successfully.
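
The jovial_yonath, stoic_chebyshev and tender_sanderson one-shot containers each emit a fresh base64 secret (the AQA... lines): the new cluster's initial auth keys (mon., client.admin, and the bootstrap key). Each is presumably generated along the lines of:

    # One fresh secret per run, matching the AQA... output format above
    podman run --rm --entrypoint /usr/bin/ceph-authtool quay.io/ceph/ceph:v19 --gen-print-key
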
Jan 26 09:38:18 compute-0 podman[73754]: 2026-01-26 09:38:18.952276771 +0000 UTC m=+0.089308568 container create 732cf0f1ff77ca5caf33b246edd97ee6257b26a9f9228fb6e00f4b8f92e3727a (image=quay.io/ceph/ceph:v19, name=jolly_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 09:38:18 compute-0 systemd[1]: Started libpod-conmon-732cf0f1ff77ca5caf33b246edd97ee6257b26a9f9228fb6e00f4b8f92e3727a.scope.
Jan 26 09:38:18 compute-0 podman[73754]: 2026-01-26 09:38:18.90204391 +0000 UTC m=+0.039075697 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db36012cd6d8a1f74101790266e1172c1964f4425252c3c323eab13a1fd0c66/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:19 compute-0 podman[73754]: 2026-01-26 09:38:19.040112106 +0000 UTC m=+0.177143973 container init 732cf0f1ff77ca5caf33b246edd97ee6257b26a9f9228fb6e00f4b8f92e3727a (image=quay.io/ceph/ceph:v19, name=jolly_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 09:38:19 compute-0 podman[73754]: 2026-01-26 09:38:19.045669638 +0000 UTC m=+0.182701415 container start 732cf0f1ff77ca5caf33b246edd97ee6257b26a9f9228fb6e00f4b8f92e3727a (image=quay.io/ceph/ceph:v19, name=jolly_kapitsa, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:38:19 compute-0 podman[73754]: 2026-01-26 09:38:19.049256876 +0000 UTC m=+0.186288673 container attach 732cf0f1ff77ca5caf33b246edd97ee6257b26a9f9228fb6e00f4b8f92e3727a (image=quay.io/ceph/ceph:v19, name=jolly_kapitsa, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 09:38:19 compute-0 jolly_kapitsa[73771]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 26 09:38:19 compute-0 jolly_kapitsa[73771]: setting min_mon_release = quincy
Jan 26 09:38:19 compute-0 jolly_kapitsa[73771]: /usr/bin/monmaptool: set fsid to 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:19 compute-0 jolly_kapitsa[73771]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 26 09:38:19 compute-0 systemd[1]: libpod-732cf0f1ff77ca5caf33b246edd97ee6257b26a9f9228fb6e00f4b8f92e3727a.scope: Deactivated successfully.
Jan 26 09:38:19 compute-0 podman[73754]: 2026-01-26 09:38:19.07211405 +0000 UTC m=+0.209145837 container died 732cf0f1ff77ca5caf33b246edd97ee6257b26a9f9228fb6e00f4b8f92e3727a (image=quay.io/ceph/ceph:v19, name=jolly_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 09:38:19 compute-0 podman[73754]: 2026-01-26 09:38:19.163359019 +0000 UTC m=+0.300390846 container remove 732cf0f1ff77ca5caf33b246edd97ee6257b26a9f9228fb6e00f4b8f92e3727a (image=quay.io/ceph/ceph:v19, name=jolly_kapitsa, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:19 compute-0 systemd[1]: libpod-conmon-732cf0f1ff77ca5caf33b246edd97ee6257b26a9f9228fb6e00f4b8f92e3727a.scope: Deactivated successfully.
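
jolly_kapitsa is the initial monmap build. From its output, the call was monmaptool --create with the bootstrap fsid and a single monitor; the address-vector flag below is an assumption (the mon IP comes from --mon-ip, the ports are Ceph's defaults), and as the logged output shows, cephadm also pins min_mon_release:

    # Reconstruction; address-vector form assumed
    monmaptool --create \
      --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 \
      --addv compute-0 '[v2:192.168.122.100:3300,v1:192.168.122.100:6789]' \
      /tmp/monmap
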
Jan 26 09:38:19 compute-0 podman[73790]: 2026-01-26 09:38:19.257502178 +0000 UTC m=+0.063147474 container create df0bbd8bd0df38dae6020e973b767450fd62121c7ec7c58802774d16073bed12 (image=quay.io/ceph/ceph:v19, name=admiring_ganguly, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 26 09:38:19 compute-0 podman[73790]: 2026-01-26 09:38:19.225222237 +0000 UTC m=+0.030867563 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:19 compute-0 systemd[1]: Started libpod-conmon-df0bbd8bd0df38dae6020e973b767450fd62121c7ec7c58802774d16073bed12.scope.
Jan 26 09:38:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0afc6abb5f7c0f1317d4270126c67653c40f5d88abd77b1a4a15a1797037de3c/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0afc6abb5f7c0f1317d4270126c67653c40f5d88abd77b1a4a15a1797037de3c/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0afc6abb5f7c0f1317d4270126c67653c40f5d88abd77b1a4a15a1797037de3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0afc6abb5f7c0f1317d4270126c67653c40f5d88abd77b1a4a15a1797037de3c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:19 compute-0 podman[73790]: 2026-01-26 09:38:19.363747196 +0000 UTC m=+0.169392512 container init df0bbd8bd0df38dae6020e973b767450fd62121c7ec7c58802774d16073bed12 (image=quay.io/ceph/ceph:v19, name=admiring_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 09:38:19 compute-0 podman[73790]: 2026-01-26 09:38:19.368898466 +0000 UTC m=+0.174543762 container start df0bbd8bd0df38dae6020e973b767450fd62121c7ec7c58802774d16073bed12 (image=quay.io/ceph/ceph:v19, name=admiring_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 09:38:19 compute-0 podman[73790]: 2026-01-26 09:38:19.372360942 +0000 UTC m=+0.178006238 container attach df0bbd8bd0df38dae6020e973b767450fd62121c7ec7c58802774d16073bed12 (image=quay.io/ceph/ceph:v19, name=admiring_ganguly, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 09:38:19 compute-0 systemd[1]: libpod-df0bbd8bd0df38dae6020e973b767450fd62121c7ec7c58802774d16073bed12.scope: Deactivated successfully.
Jan 26 09:38:19 compute-0 podman[73790]: 2026-01-26 09:38:19.602923272 +0000 UTC m=+0.408568638 container died df0bbd8bd0df38dae6020e973b767450fd62121c7ec7c58802774d16073bed12 (image=quay.io/ceph/ceph:v19, name=admiring_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-0afc6abb5f7c0f1317d4270126c67653c40f5d88abd77b1a4a15a1797037de3c-merged.mount: Deactivated successfully.
Jan 26 09:38:19 compute-0 podman[73790]: 2026-01-26 09:38:19.711513015 +0000 UTC m=+0.517158311 container remove df0bbd8bd0df38dae6020e973b767450fd62121c7ec7c58802774d16073bed12 (image=quay.io/ceph/ceph:v19, name=admiring_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 09:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:38:19 compute-0 systemd[1]: libpod-conmon-df0bbd8bd0df38dae6020e973b767450fd62121c7ec7c58802774d16073bed12.scope: Deactivated successfully.
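
admiring_ganguly, with /tmp/monmap, /tmp/keyring and the mon data directory bind-mounted (the xfs remount lines above), is almost certainly the monitor's one-time store initialization, i.e. inside the container something like:

    # Likely the mkfs step: build the mon store from the monmap and bootstrap keyring
    ceph-mon --mkfs -i compute-0 --monmap /tmp/monmap --keyring /tmp/keyring
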
Jan 26 09:38:19 compute-0 systemd[1]: Reloading.
Jan 26 09:38:20 compute-0 systemd-rc-local-generator[73874]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:38:20 compute-0 systemd-sysv-generator[73878]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:38:20 compute-0 systemd[1]: Reloading.
Jan 26 09:38:20 compute-0 systemd-sysv-generator[73915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:38:20 compute-0 systemd-rc-local-generator[73910]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:38:20 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 26 09:38:20 compute-0 systemd[1]: Reloading.
Jan 26 09:38:20 compute-0 systemd-rc-local-generator[73950]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:38:20 compute-0 systemd-sysv-generator[73953]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:38:20 compute-0 systemd[1]: Reached target Ceph cluster 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:38:20 compute-0 systemd[1]: Reloading.
Jan 26 09:38:20 compute-0 systemd-rc-local-generator[73988]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:38:20 compute-0 systemd-sysv-generator[73992]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:38:21 compute-0 systemd[1]: Reloading.
Jan 26 09:38:21 compute-0 systemd-sysv-generator[74029]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:38:21 compute-0 systemd-rc-local-generator[74026]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:38:21 compute-0 systemd[1]: Created slice Slice /system/ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:38:21 compute-0 systemd[1]: Reached target System Time Set.
Jan 26 09:38:21 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 26 09:38:21 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:38:21 compute-0 podman[74082]: 2026-01-26 09:38:21.740494443 +0000 UTC m=+0.051014583 container create c8d20851ea0c1a6362a12a9da680f70b4e002198eb74813bb384763ba54acdf9 (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63211c680a984be17c538bfd6c15070c1f303424c639bceb664eb00659e47cab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63211c680a984be17c538bfd6c15070c1f303424c639bceb664eb00659e47cab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63211c680a984be17c538bfd6c15070c1f303424c639bceb664eb00659e47cab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63211c680a984be17c538bfd6c15070c1f303424c639bceb664eb00659e47cab/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:21 compute-0 podman[74082]: 2026-01-26 09:38:21.710456513 +0000 UTC m=+0.020976673 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:21 compute-0 podman[74082]: 2026-01-26 09:38:21.833944463 +0000 UTC m=+0.144464633 container init c8d20851ea0c1a6362a12a9da680f70b4e002198eb74813bb384763ba54acdf9 (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 26 09:38:21 compute-0 podman[74082]: 2026-01-26 09:38:21.840490491 +0000 UTC m=+0.151010671 container start c8d20851ea0c1a6362a12a9da680f70b4e002198eb74813bb384763ba54acdf9 (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 09:38:21 compute-0 bash[74082]: c8d20851ea0c1a6362a12a9da680f70b4e002198eb74813bb384763ba54acdf9
Jan 26 09:38:21 compute-0 systemd[1]: Started Ceph mon.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:38:21 compute-0 ceph-mon[74102]: set uid:gid to 167:167 (ceph:ceph)
Jan 26 09:38:21 compute-0 ceph-mon[74102]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Jan 26 09:38:21 compute-0 ceph-mon[74102]: pidfile_write: ignore empty --pid-file
Jan 26 09:38:21 compute-0 ceph-mon[74102]: load: jerasure load: lrc 
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: RocksDB version: 7.9.2
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Git sha 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: DB SUMMARY
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: DB Session ID:  K5I8M1OMDXXN8OHC105K
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: CURRENT file:  CURRENT
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: IDENTITY file:  IDENTITY
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                         Options.error_if_exists: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                       Options.create_if_missing: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                         Options.paranoid_checks: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                                     Options.env: 0x5593b71b3c20
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                                Options.info_log: 0x5593b8c35940
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                Options.max_file_opening_threads: 16
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                              Options.statistics: (nil)
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                               Options.use_fsync: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                       Options.max_log_file_size: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                         Options.allow_fallocate: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                        Options.use_direct_reads: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:          Options.create_missing_column_families: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                              Options.db_log_dir: 
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                                 Options.wal_dir: 
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                   Options.advise_random_on_open: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                    Options.write_buffer_manager: 0x5593b8c39900
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                            Options.rate_limiter: (nil)
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                  Options.unordered_write: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                               Options.row_cache: None
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                              Options.wal_filter: None
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.allow_ingest_behind: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.two_write_queues: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.manual_wal_flush: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.wal_compression: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.atomic_flush: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                 Options.log_readahead_size: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.allow_data_in_errors: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.db_host_id: __hostname__
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.max_background_jobs: 2
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.max_background_compactions: -1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.max_subcompactions: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.max_total_wal_size: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                          Options.max_open_files: -1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                          Options.bytes_per_sync: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:       Options.compaction_readahead_size: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                  Options.max_background_flushes: -1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Compression algorithms supported:
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         kZSTD supported: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         kXpressCompression supported: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         kBZip2Compression supported: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         kLZ4Compression supported: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         kZlibCompression supported: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         kLZ4HCCompression supported: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         kSnappyCompression supported: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:           Options.merge_operator: 
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:        Options.compaction_filter: None
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5593b8c355e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5593b8c589b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:        Options.write_buffer_size: 33554432
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:  Options.max_write_buffer_number: 2
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:          Options.compression: NoCompression
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.num_levels: 7
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 61a73b27-20ff-4d9e-babd-7b87c9b5b4e0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420301878418, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420301880170, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "K5I8M1OMDXXN8OHC105K", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420301880296, "job": 1, "event": "recovery_finished"}
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5593b8c5ae00
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: DB pointer 0x5593b8c6a000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 09:38:21 compute-0 ceph-mon[74102]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5593b8c589b0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 09:38:21 compute-0 ceph-mon[74102]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@-1(???) e0 preinit fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 26 09:38:21 compute-0 ceph-mon[74102]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 09:38:21 compute-0 ceph-mon[74102]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 26 09:38:21 compute-0 ceph-mon[74102]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 26 09:38:21 compute-0 podman[74109]: 2026-01-26 09:38:21.971179657 +0000 UTC m=+0.070990498 container create 17995da3a2c958fdcec75f8aeb506be2cebdea1d259d36e07644e9cca23430c7 (image=quay.io/ceph/ceph:v19, name=focused_boyd, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 09:38:21 compute-0 ceph-mon[74102]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 26 09:38:21 compute-0 ceph-mon[74102]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: log_channel(cluster) log [DBG] : fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:21 compute-0 ceph-mon[74102]: log_channel(cluster) log [DBG] : last_changed 2026-01-26T09:38:19.068625+0000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: log_channel(cluster) log [DBG] : created 2026-01-26T09:38:19.068625+0000
Jan 26 09:38:21 compute-0 ceph-mon[74102]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 26 09:38:21 compute-0 ceph-mon[74102]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 26 09:38:21 compute-0 ceph-mon[74102]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 09:38:21 compute-0 ceph-mon[74102]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,os=Linux}
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).mds e1 new map
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-01-26T09:38:21.975599+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 26 09:38:22 compute-0 podman[74109]: 2026-01-26 09:38:21.939388009 +0000 UTC m=+0.039198900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:22 compute-0 ceph-mon[74102]: log_channel(cluster) log [DBG] : fsmap 
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mkfs 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 26 09:38:22 compute-0 ceph-mon[74102]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 26 09:38:22 compute-0 ceph-mon[74102]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 26 09:38:22 compute-0 systemd[1]: Started libpod-conmon-17995da3a2c958fdcec75f8aeb506be2cebdea1d259d36e07644e9cca23430c7.scope.
Jan 26 09:38:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a63aacbc8df6d8f294927c1a1b7af002de42a93a47110db5a296e07a70445f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a63aacbc8df6d8f294927c1a1b7af002de42a93a47110db5a296e07a70445f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a63aacbc8df6d8f294927c1a1b7af002de42a93a47110db5a296e07a70445f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:22 compute-0 podman[74109]: 2026-01-26 09:38:22.113693995 +0000 UTC m=+0.213504796 container init 17995da3a2c958fdcec75f8aeb506be2cebdea1d259d36e07644e9cca23430c7 (image=quay.io/ceph/ceph:v19, name=focused_boyd, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:22 compute-0 podman[74109]: 2026-01-26 09:38:22.122106494 +0000 UTC m=+0.221917295 container start 17995da3a2c958fdcec75f8aeb506be2cebdea1d259d36e07644e9cca23430c7 (image=quay.io/ceph/ceph:v19, name=focused_boyd, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 26 09:38:22 compute-0 podman[74109]: 2026-01-26 09:38:22.125809766 +0000 UTC m=+0.225620567 container attach 17995da3a2c958fdcec75f8aeb506be2cebdea1d259d36e07644e9cca23430c7 (image=quay.io/ceph/ceph:v19, name=focused_boyd, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 09:38:22 compute-0 ceph-mon[74102]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 26 09:38:22 compute-0 ceph-mon[74102]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681539675' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 09:38:22 compute-0 focused_boyd[74157]:   cluster:
Jan 26 09:38:22 compute-0 focused_boyd[74157]:     id:     1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:22 compute-0 focused_boyd[74157]:     health: HEALTH_OK
Jan 26 09:38:22 compute-0 focused_boyd[74157]:  
Jan 26 09:38:22 compute-0 focused_boyd[74157]:   services:
Jan 26 09:38:22 compute-0 focused_boyd[74157]:     mon: 1 daemons, quorum compute-0 (age 0.345135s)
Jan 26 09:38:22 compute-0 focused_boyd[74157]:     mgr: no daemons active
Jan 26 09:38:22 compute-0 focused_boyd[74157]:     osd: 0 osds: 0 up, 0 in
Jan 26 09:38:22 compute-0 focused_boyd[74157]:  
Jan 26 09:38:22 compute-0 focused_boyd[74157]:   data:
Jan 26 09:38:22 compute-0 focused_boyd[74157]:     pools:   0 pools, 0 pgs
Jan 26 09:38:22 compute-0 focused_boyd[74157]:     objects: 0 objects, 0 B
Jan 26 09:38:22 compute-0 focused_boyd[74157]:     usage:   0 B used, 0 B / 0 B avail
Jan 26 09:38:22 compute-0 focused_boyd[74157]:     pgs:     
Jan 26 09:38:22 compute-0 focused_boyd[74157]:  
Jan 26 09:38:22 compute-0 systemd[1]: libpod-17995da3a2c958fdcec75f8aeb506be2cebdea1d259d36e07644e9cca23430c7.scope: Deactivated successfully.
Jan 26 09:38:22 compute-0 conmon[74157]: conmon 17995da3a2c958fdcec7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-17995da3a2c958fdcec75f8aeb506be2cebdea1d259d36e07644e9cca23430c7.scope/container/memory.events
Jan 26 09:38:22 compute-0 podman[74109]: 2026-01-26 09:38:22.337487341 +0000 UTC m=+0.437298162 container died 17995da3a2c958fdcec75f8aeb506be2cebdea1d259d36e07644e9cca23430c7 (image=quay.io/ceph/ceph:v19, name=focused_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 09:38:22 compute-0 podman[74109]: 2026-01-26 09:38:22.600883097 +0000 UTC m=+0.700693888 container remove 17995da3a2c958fdcec75f8aeb506be2cebdea1d259d36e07644e9cca23430c7 (image=quay.io/ceph/ceph:v19, name=focused_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 09:38:22 compute-0 systemd[1]: libpod-conmon-17995da3a2c958fdcec75f8aeb506be2cebdea1d259d36e07644e9cca23430c7.scope: Deactivated successfully.
Jan 26 09:38:22 compute-0 podman[74195]: 2026-01-26 09:38:22.724605973 +0000 UTC m=+0.098526990 container create c7d94958df1d3c99cb899f0a3ddda2a067409c9b1cc9a3a42af9b9083c03854f (image=quay.io/ceph/ceph:v19, name=determined_edison, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 09:38:22 compute-0 podman[74195]: 2026-01-26 09:38:22.658098858 +0000 UTC m=+0.032019855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:22 compute-0 systemd[1]: Started libpod-conmon-c7d94958df1d3c99cb899f0a3ddda2a067409c9b1cc9a3a42af9b9083c03854f.scope.
Jan 26 09:38:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f737c37f968b29938f93cd4a6051387aee516b049227c4b6e355a4d88684a5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f737c37f968b29938f93cd4a6051387aee516b049227c4b6e355a4d88684a5e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f737c37f968b29938f93cd4a6051387aee516b049227c4b6e355a4d88684a5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f737c37f968b29938f93cd4a6051387aee516b049227c4b6e355a4d88684a5e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:22 compute-0 podman[74195]: 2026-01-26 09:38:22.811125454 +0000 UTC m=+0.185046441 container init c7d94958df1d3c99cb899f0a3ddda2a067409c9b1cc9a3a42af9b9083c03854f (image=quay.io/ceph/ceph:v19, name=determined_edison, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 09:38:22 compute-0 podman[74195]: 2026-01-26 09:38:22.820222872 +0000 UTC m=+0.194143849 container start c7d94958df1d3c99cb899f0a3ddda2a067409c9b1cc9a3a42af9b9083c03854f (image=quay.io/ceph/ceph:v19, name=determined_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:22 compute-0 podman[74195]: 2026-01-26 09:38:22.82380462 +0000 UTC m=+0.197725617 container attach c7d94958df1d3c99cb899f0a3ddda2a067409c9b1cc9a3a42af9b9083c03854f (image=quay.io/ceph/ceph:v19, name=determined_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:23 compute-0 ceph-mon[74102]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 26 09:38:23 compute-0 ceph-mon[74102]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4076828442' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 26 09:38:23 compute-0 ceph-mon[74102]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4076828442' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 26 09:38:23 compute-0 determined_edison[74212]: 
Jan 26 09:38:23 compute-0 determined_edison[74212]: [global]
Jan 26 09:38:23 compute-0 determined_edison[74212]:         fsid = 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:23 compute-0 determined_edison[74212]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 26 09:38:23 compute-0 systemd[1]: libpod-c7d94958df1d3c99cb899f0a3ddda2a067409c9b1cc9a3a42af9b9083c03854f.scope: Deactivated successfully.
Jan 26 09:38:23 compute-0 podman[74195]: 2026-01-26 09:38:23.212549775 +0000 UTC m=+0.586470782 container died c7d94958df1d3c99cb899f0a3ddda2a067409c9b1cc9a3a42af9b9083c03854f (image=quay.io/ceph/ceph:v19, name=determined_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 09:38:23 compute-0 ceph-mon[74102]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 26 09:38:23 compute-0 ceph-mon[74102]: monmap epoch 1
Jan 26 09:38:23 compute-0 ceph-mon[74102]: fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:23 compute-0 ceph-mon[74102]: last_changed 2026-01-26T09:38:19.068625+0000
Jan 26 09:38:23 compute-0 ceph-mon[74102]: created 2026-01-26T09:38:19.068625+0000
Jan 26 09:38:23 compute-0 ceph-mon[74102]: min_mon_release 19 (squid)
Jan 26 09:38:23 compute-0 ceph-mon[74102]: election_strategy: 1
Jan 26 09:38:23 compute-0 ceph-mon[74102]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 26 09:38:23 compute-0 ceph-mon[74102]: fsmap 
Jan 26 09:38:23 compute-0 ceph-mon[74102]: osdmap e1: 0 total, 0 up, 0 in
Jan 26 09:38:23 compute-0 ceph-mon[74102]: mgrmap e1: no daemons active
Jan 26 09:38:23 compute-0 ceph-mon[74102]: from='client.? 192.168.122.100:0/3681539675' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 09:38:23 compute-0 sshd-session[74237]: Invalid user admin from 157.245.76.178 port 45146
Jan 26 09:38:23 compute-0 sshd-session[74237]: Connection closed by invalid user admin 157.245.76.178 port 45146 [preauth]
Jan 26 09:38:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f737c37f968b29938f93cd4a6051387aee516b049227c4b6e355a4d88684a5e-merged.mount: Deactivated successfully.
Jan 26 09:38:24 compute-0 podman[74195]: 2026-01-26 09:38:24.230702524 +0000 UTC m=+1.604623541 container remove c7d94958df1d3c99cb899f0a3ddda2a067409c9b1cc9a3a42af9b9083c03854f (image=quay.io/ceph/ceph:v19, name=determined_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:24 compute-0 systemd[1]: libpod-conmon-c7d94958df1d3c99cb899f0a3ddda2a067409c9b1cc9a3a42af9b9083c03854f.scope: Deactivated successfully.
Jan 26 09:38:24 compute-0 podman[74251]: 2026-01-26 09:38:24.367051255 +0000 UTC m=+0.111113443 container create cb9f570dd44b3bc00536c204caa15d9bc3a7462dbdb0871597f871adeee3ca3a (image=quay.io/ceph/ceph:v19, name=nostalgic_yalow, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:24 compute-0 ceph-mon[74102]: from='client.? 192.168.122.100:0/4076828442' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 26 09:38:24 compute-0 ceph-mon[74102]: from='client.? 192.168.122.100:0/4076828442' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 26 09:38:24 compute-0 podman[74251]: 2026-01-26 09:38:24.284636796 +0000 UTC m=+0.028699044 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:24 compute-0 systemd[1]: Started libpod-conmon-cb9f570dd44b3bc00536c204caa15d9bc3a7462dbdb0871597f871adeee3ca3a.scope.
Jan 26 09:38:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bd9f23d6786d93b62910ae0a9c52afb9f14fdb51cf77ca1db3df2a39eb0afe5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bd9f23d6786d93b62910ae0a9c52afb9f14fdb51cf77ca1db3df2a39eb0afe5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bd9f23d6786d93b62910ae0a9c52afb9f14fdb51cf77ca1db3df2a39eb0afe5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bd9f23d6786d93b62910ae0a9c52afb9f14fdb51cf77ca1db3df2a39eb0afe5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:24 compute-0 podman[74251]: 2026-01-26 09:38:24.566750873 +0000 UTC m=+0.310813051 container init cb9f570dd44b3bc00536c204caa15d9bc3a7462dbdb0871597f871adeee3ca3a (image=quay.io/ceph/ceph:v19, name=nostalgic_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 26 09:38:24 compute-0 podman[74251]: 2026-01-26 09:38:24.57284002 +0000 UTC m=+0.316902178 container start cb9f570dd44b3bc00536c204caa15d9bc3a7462dbdb0871597f871adeee3ca3a (image=quay.io/ceph/ceph:v19, name=nostalgic_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:24 compute-0 podman[74251]: 2026-01-26 09:38:24.576459118 +0000 UTC m=+0.320521276 container attach cb9f570dd44b3bc00536c204caa15d9bc3a7462dbdb0871597f871adeee3ca3a (image=quay.io/ceph/ceph:v19, name=nostalgic_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:38:24 compute-0 ceph-mon[74102]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:38:24 compute-0 ceph-mon[74102]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/847614297' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:38:24 compute-0 systemd[1]: libpod-cb9f570dd44b3bc00536c204caa15d9bc3a7462dbdb0871597f871adeee3ca3a.scope: Deactivated successfully.
Jan 26 09:38:24 compute-0 podman[74251]: 2026-01-26 09:38:24.767408498 +0000 UTC m=+0.511470656 container died cb9f570dd44b3bc00536c204caa15d9bc3a7462dbdb0871597f871adeee3ca3a (image=quay.io/ceph/ceph:v19, name=nostalgic_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 09:38:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bd9f23d6786d93b62910ae0a9c52afb9f14fdb51cf77ca1db3df2a39eb0afe5-merged.mount: Deactivated successfully.
Jan 26 09:38:25 compute-0 podman[74251]: 2026-01-26 09:38:25.325037422 +0000 UTC m=+1.069099620 container remove cb9f570dd44b3bc00536c204caa15d9bc3a7462dbdb0871597f871adeee3ca3a (image=quay.io/ceph/ceph:v19, name=nostalgic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Jan 26 09:38:25 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:38:25 compute-0 systemd[1]: libpod-conmon-cb9f570dd44b3bc00536c204caa15d9bc3a7462dbdb0871597f871adeee3ca3a.scope: Deactivated successfully.
Jan 26 09:38:25 compute-0 ceph-mon[74102]: from='client.? 192.168.122.100:0/847614297' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:38:25 compute-0 ceph-mon[74102]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 26 09:38:25 compute-0 ceph-mon[74102]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 26 09:38:25 compute-0 ceph-mon[74102]: mon.compute-0@0(leader) e1 shutdown
Jan 26 09:38:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0[74098]: 2026-01-26T09:38:25.547+0000 7f30a1fc0640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 26 09:38:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0[74098]: 2026-01-26T09:38:25.547+0000 7f30a1fc0640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 26 09:38:25 compute-0 ceph-mon[74102]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 26 09:38:25 compute-0 ceph-mon[74102]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 26 09:38:25 compute-0 podman[74335]: 2026-01-26 09:38:25.770690411 +0000 UTC m=+0.275783345 container died c8d20851ea0c1a6362a12a9da680f70b4e002198eb74813bb384763ba54acdf9 (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-63211c680a984be17c538bfd6c15070c1f303424c639bceb664eb00659e47cab-merged.mount: Deactivated successfully.
Jan 26 09:38:26 compute-0 podman[74335]: 2026-01-26 09:38:26.163361535 +0000 UTC m=+0.668454469 container remove c8d20851ea0c1a6362a12a9da680f70b4e002198eb74813bb384763ba54acdf9 (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:38:26 compute-0 bash[74335]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0
Jan 26 09:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:38:26 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@mon.compute-0.service: Deactivated successfully.
Jan 26 09:38:26 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:38:26 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 09:38:26 compute-0 podman[74437]: 2026-01-26 09:38:26.552175773 +0000 UTC m=+0.054533879 container create 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:26 compute-0 podman[74437]: 2026-01-26 09:38:26.526627056 +0000 UTC m=+0.028985262 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190e88a8d71c386519658e2b13cd0b391326e2717aec07071909c888f2314f12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190e88a8d71c386519658e2b13cd0b391326e2717aec07071909c888f2314f12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190e88a8d71c386519658e2b13cd0b391326e2717aec07071909c888f2314f12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190e88a8d71c386519658e2b13cd0b391326e2717aec07071909c888f2314f12/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:26 compute-0 podman[74437]: 2026-01-26 09:38:26.73495568 +0000 UTC m=+0.237313806 container init 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 26 09:38:26 compute-0 podman[74437]: 2026-01-26 09:38:26.747448981 +0000 UTC m=+0.249807077 container start 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 26 09:38:26 compute-0 ceph-mon[74456]: set uid:gid to 167:167 (ceph:ceph)
Jan 26 09:38:26 compute-0 ceph-mon[74456]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Jan 26 09:38:26 compute-0 ceph-mon[74456]: pidfile_write: ignore empty --pid-file
Jan 26 09:38:26 compute-0 ceph-mon[74456]: load: jerasure load: lrc 
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: RocksDB version: 7.9.2
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Git sha 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: DB SUMMARY
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: DB Session ID:  4MS8UCW9WHMM6ZPZ0YHT
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: CURRENT file:  CURRENT
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: IDENTITY file:  IDENTITY
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60443 ; 
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                         Options.error_if_exists: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                       Options.create_if_missing: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                         Options.paranoid_checks: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                                     Options.env: 0x55a9cbc4ec20
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                                Options.info_log: 0x55a9cd677ac0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                Options.max_file_opening_threads: 16
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                              Options.statistics: (nil)
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                               Options.use_fsync: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                       Options.max_log_file_size: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                         Options.allow_fallocate: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                        Options.use_direct_reads: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:          Options.create_missing_column_families: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                              Options.db_log_dir: 
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                                 Options.wal_dir: 
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                   Options.advise_random_on_open: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                    Options.write_buffer_manager: 0x55a9cd67b900
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                            Options.rate_limiter: (nil)
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                  Options.unordered_write: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                               Options.row_cache: None
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                              Options.wal_filter: None
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.allow_ingest_behind: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.two_write_queues: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.manual_wal_flush: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.wal_compression: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.atomic_flush: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                 Options.log_readahead_size: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.allow_data_in_errors: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.db_host_id: __hostname__
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.max_background_jobs: 2
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.max_background_compactions: -1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.max_subcompactions: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.max_total_wal_size: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                          Options.max_open_files: -1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                          Options.bytes_per_sync: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:       Options.compaction_readahead_size: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                  Options.max_background_flushes: -1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Compression algorithms supported:
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         kZSTD supported: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         kXpressCompression supported: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         kBZip2Compression supported: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         kLZ4Compression supported: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         kZlibCompression supported: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         kLZ4HCCompression supported: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         kSnappyCompression supported: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:           Options.merge_operator: 
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:        Options.compaction_filter: None
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a9cd676aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a9cd69b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:        Options.write_buffer_size: 33554432
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:  Options.max_write_buffer_number: 2
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:          Options.compression: NoCompression
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.num_levels: 7
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 61a73b27-20ff-4d9e-babd-7b87c9b5b4e0
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420306808014, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 26 09:38:26 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 26 09:38:27 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420307067094, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59943, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 150, "table_properties": {"data_size": 58398, "index_size": 187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3267, "raw_average_key_size": 29, "raw_value_size": 55816, "raw_average_value_size": 512, "num_data_blocks": 9, "num_entries": 109, "num_filter_entries": 109, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420306, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:38:27 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420307067413, "job": 1, "event": "recovery_finished"}
Jan 26 09:38:27 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 26 09:38:27 compute-0 bash[74437]: 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a
Jan 26 09:38:27 compute-0 systemd[1]: Started Ceph mon.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:38:27 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:38:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a9cd69ce00
Jan 26 09:38:27 compute-0 ceph-mon[74456]: rocksdb: DB pointer 0x55a9cd7a6000
Jan 26 09:38:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 09:38:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.44 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.26              0.00         1    0.259       0      0       0.0       0.0
                                            Sum      2/0   60.44 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.26              0.00         1    0.259       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.26              0.00         1    0.259       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.26              0.00         1    0.259       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a9cd69b350#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.38 KB,7.15256e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 09:38:27 compute-0 ceph-mon[74456]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0@-1(???) e1 preinit fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0@-1(???).mds e1 new map
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-01-26T09:38:21.975599+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 26 09:38:27 compute-0 ceph-mon[74456]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 09:38:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 26 09:38:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 26 09:38:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : last_changed 2026-01-26T09:38:19.068625+0000
Jan 26 09:38:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : created 2026-01-26T09:38:19.068625+0000
Jan 26 09:38:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 26 09:38:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 26 09:38:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 09:38:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsmap 
Jan 26 09:38:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 26 09:38:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 26 09:38:27 compute-0 podman[74478]: 2026-01-26 09:38:27.211753819 +0000 UTC m=+0.086910733 container create ff5d492109ee3f4f60ec5fd7ed9ba5be1ae2baf4220b10e0be04d88d5ff93f1b (image=quay.io/ceph/ceph:v19, name=hopeful_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:38:27 compute-0 systemd[1]: Started libpod-conmon-ff5d492109ee3f4f60ec5fd7ed9ba5be1ae2baf4220b10e0be04d88d5ff93f1b.scope.
Jan 26 09:38:27 compute-0 podman[74478]: 2026-01-26 09:38:27.168412637 +0000 UTC m=+0.043569551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 26 09:38:27 compute-0 ceph-mon[74456]: monmap epoch 1
Jan 26 09:38:27 compute-0 ceph-mon[74456]: fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:27 compute-0 ceph-mon[74456]: last_changed 2026-01-26T09:38:19.068625+0000
Jan 26 09:38:27 compute-0 ceph-mon[74456]: created 2026-01-26T09:38:19.068625+0000
Jan 26 09:38:27 compute-0 ceph-mon[74456]: min_mon_release 19 (squid)
Jan 26 09:38:27 compute-0 ceph-mon[74456]: election_strategy: 1
Jan 26 09:38:27 compute-0 ceph-mon[74456]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 26 09:38:27 compute-0 ceph-mon[74456]: fsmap 
Jan 26 09:38:27 compute-0 ceph-mon[74456]: osdmap e1: 0 total, 0 up, 0 in
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mgrmap e1: no daemons active
Jan 26 09:38:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b0fecbde4e5cac4c985d99f95c7f2a470e5a6ee87f39bee590f77c030936a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b0fecbde4e5cac4c985d99f95c7f2a470e5a6ee87f39bee590f77c030936a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b0fecbde4e5cac4c985d99f95c7f2a470e5a6ee87f39bee590f77c030936a6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:27 compute-0 podman[74478]: 2026-01-26 09:38:27.450037701 +0000 UTC m=+0.325194675 container init ff5d492109ee3f4f60ec5fd7ed9ba5be1ae2baf4220b10e0be04d88d5ff93f1b (image=quay.io/ceph/ceph:v19, name=hopeful_wu, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 09:38:27 compute-0 podman[74478]: 2026-01-26 09:38:27.460875246 +0000 UTC m=+0.336032150 container start ff5d492109ee3f4f60ec5fd7ed9ba5be1ae2baf4220b10e0be04d88d5ff93f1b (image=quay.io/ceph/ceph:v19, name=hopeful_wu, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 09:38:27 compute-0 podman[74478]: 2026-01-26 09:38:27.476728419 +0000 UTC m=+0.351885333 container attach ff5d492109ee3f4f60ec5fd7ed9ba5be1ae2baf4220b10e0be04d88d5ff93f1b (image=quay.io/ceph/ceph:v19, name=hopeful_wu, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 26 09:38:27 compute-0 systemd[1]: libpod-ff5d492109ee3f4f60ec5fd7ed9ba5be1ae2baf4220b10e0be04d88d5ff93f1b.scope: Deactivated successfully.
Jan 26 09:38:27 compute-0 podman[74478]: 2026-01-26 09:38:27.719328397 +0000 UTC m=+0.594485361 container died ff5d492109ee3f4f60ec5fd7ed9ba5be1ae2baf4220b10e0be04d88d5ff93f1b (image=quay.io/ceph/ceph:v19, name=hopeful_wu, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 09:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-17b0fecbde4e5cac4c985d99f95c7f2a470e5a6ee87f39bee590f77c030936a6-merged.mount: Deactivated successfully.
Jan 26 09:38:27 compute-0 podman[74478]: 2026-01-26 09:38:27.981312925 +0000 UTC m=+0.856469839 container remove ff5d492109ee3f4f60ec5fd7ed9ba5be1ae2baf4220b10e0be04d88d5ff93f1b (image=quay.io/ceph/ceph:v19, name=hopeful_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:27 compute-0 systemd[1]: libpod-conmon-ff5d492109ee3f4f60ec5fd7ed9ba5be1ae2baf4220b10e0be04d88d5ff93f1b.scope: Deactivated successfully.
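The create → start → attach → died → remove sequence above (container hopeful_wu) is cephadm running a one-shot helper container, here used to push "config set public_network" through the mon. A minimal sketch of the same one-shot pattern, assuming only that podman is installed and quay.io/ceph/ceph:v19 is pullable; the real invocation also bind-mounts /etc/ceph and the admin keyring, which is omitted here:

    import subprocess

    # podman's --rm produces exactly the lifecycle logged above:
    # create, start, attach, died, remove in a single call.
    result = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v19", "ceph", "--version"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # e.g. "ceph version 19.2.3 (...) squid (stable)"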
Jan 26 09:38:28 compute-0 podman[74552]: 2026-01-26 09:38:28.041633821 +0000 UTC m=+0.028592581 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:28 compute-0 podman[74552]: 2026-01-26 09:38:28.300570995 +0000 UTC m=+0.287529695 container create f1679d22403126a6944fcdf7841a85a5435f43c38b80f11f98097d4d08ad2b01 (image=quay.io/ceph/ceph:v19, name=clever_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 09:38:28 compute-0 systemd[1]: Started libpod-conmon-f1679d22403126a6944fcdf7841a85a5435f43c38b80f11f98097d4d08ad2b01.scope.
Jan 26 09:38:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cb2a0d7fcf08c6048a6a56a632465e03a78c1622fd76546ca637b02064fb10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cb2a0d7fcf08c6048a6a56a632465e03a78c1622fd76546ca637b02064fb10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cb2a0d7fcf08c6048a6a56a632465e03a78c1622fd76546ca637b02064fb10/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:28 compute-0 podman[74552]: 2026-01-26 09:38:28.634810235 +0000 UTC m=+0.621768985 container init f1679d22403126a6944fcdf7841a85a5435f43c38b80f11f98097d4d08ad2b01 (image=quay.io/ceph/ceph:v19, name=clever_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 09:38:28 compute-0 podman[74552]: 2026-01-26 09:38:28.646587956 +0000 UTC m=+0.633546696 container start f1679d22403126a6944fcdf7841a85a5435f43c38b80f11f98097d4d08ad2b01 (image=quay.io/ceph/ceph:v19, name=clever_elgamal, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:28 compute-0 podman[74552]: 2026-01-26 09:38:28.679832373 +0000 UTC m=+0.666791113 container attach f1679d22403126a6944fcdf7841a85a5435f43c38b80f11f98097d4d08ad2b01 (image=quay.io/ceph/ceph:v19, name=clever_elgamal, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 26 09:38:28 compute-0 systemd[1]: libpod-f1679d22403126a6944fcdf7841a85a5435f43c38b80f11f98097d4d08ad2b01.scope: Deactivated successfully.
Jan 26 09:38:28 compute-0 podman[74552]: 2026-01-26 09:38:28.888003583 +0000 UTC m=+0.874962283 container died f1679d22403126a6944fcdf7841a85a5435f43c38b80f11f98097d4d08ad2b01 (image=quay.io/ceph/ceph:v19, name=clever_elgamal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 09:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-70cb2a0d7fcf08c6048a6a56a632465e03a78c1622fd76546ca637b02064fb10-merged.mount: Deactivated successfully.
Jan 26 09:38:28 compute-0 podman[74552]: 2026-01-26 09:38:28.9469142 +0000 UTC m=+0.933872900 container remove f1679d22403126a6944fcdf7841a85a5435f43c38b80f11f98097d4d08ad2b01 (image=quay.io/ceph/ceph:v19, name=clever_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:28 compute-0 systemd[1]: libpod-conmon-f1679d22403126a6944fcdf7841a85a5435f43c38b80f11f98097d4d08ad2b01.scope: Deactivated successfully.
Jan 26 09:38:29 compute-0 systemd[1]: Reloading.
Jan 26 09:38:29 compute-0 systemd-rc-local-generator[74635]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:38:29 compute-0 systemd-sysv-generator[74640]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:38:29 compute-0 systemd[1]: Reloading.
Jan 26 09:38:29 compute-0 systemd-sysv-generator[74680]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:38:29 compute-0 systemd-rc-local-generator[74676]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:38:29 compute-0 systemd[1]: Starting Ceph mgr.compute-0.zllcia for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:38:30 compute-0 podman[74735]: 2026-01-26 09:38:30.17845271 +0000 UTC m=+0.026156224 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:30 compute-0 podman[74735]: 2026-01-26 09:38:30.368696111 +0000 UTC m=+0.216399625 container create 0a039908c861a4ea301f9c41aa5e6344bf0f8ce564595e34939cd3781afb8f39 (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 09:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b232ac1971906ef5300db355ad9e127ae14f3114e0a2bbdc502d253d452847/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b232ac1971906ef5300db355ad9e127ae14f3114e0a2bbdc502d253d452847/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b232ac1971906ef5300db355ad9e127ae14f3114e0a2bbdc502d253d452847/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b232ac1971906ef5300db355ad9e127ae14f3114e0a2bbdc502d253d452847/merged/var/lib/ceph/mgr/ceph-compute-0.zllcia supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:30 compute-0 podman[74735]: 2026-01-26 09:38:30.521091347 +0000 UTC m=+0.368794851 container init 0a039908c861a4ea301f9c41aa5e6344bf0f8ce564595e34939cd3781afb8f39 (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 09:38:30 compute-0 podman[74735]: 2026-01-26 09:38:30.526114225 +0000 UTC m=+0.373817739 container start 0a039908c861a4ea301f9c41aa5e6344bf0f8ce564595e34939cd3781afb8f39 (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:38:30 compute-0 ceph-mgr[74755]: set uid:gid to 167:167 (ceph:ceph)
Jan 26 09:38:30 compute-0 ceph-mgr[74755]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 26 09:38:30 compute-0 ceph-mgr[74755]: pidfile_write: ignore empty --pid-file
Jan 26 09:38:30 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'alerts'
Jan 26 09:38:30 compute-0 bash[74735]: 0a039908c861a4ea301f9c41aa5e6344bf0f8ce564595e34939cd3781afb8f39
Jan 26 09:38:30 compute-0 systemd[1]: Started Ceph mgr.compute-0.zllcia for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
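cephadm wraps each long-lived daemon in a systemd unit keyed on the cluster fsid, which is why the unit above reads "Ceph mgr.compute-0.zllcia for 1a70b85d-...". A minimal sketch for polling that unit, assuming the conventional ceph-<fsid>@<type>.<id> unit name (inferred from the journal, not stated in it):

    import subprocess

    fsid = "1a70b85d-e3fd-5814-8a6a-37ea00fcae30"       # fsid from the journal above
    unit = f"ceph-{fsid}@mgr.compute-0.zllcia.service"  # assumed cephadm naming scheme

    # `systemctl is-active` prints "active" (exit 0) once the unit is up.
    state = subprocess.run(["systemctl", "is-active", unit],
                           capture_output=True, text=True).stdout.strip()
    print(unit, "->", state)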
Jan 26 09:38:30 compute-0 ceph-mgr[74755]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 09:38:30 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'balancer'
Jan 26 09:38:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:30.689+0000 7ffb4f780140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
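The "-1 mgr[py] Module ... has missing NOTIFY_TYPES member" lines repeated through the module-loading pass below are the mgr noting that a bundled module never declares which cluster-map notifications it consumes; loading proceeds anyway. A sketch of the declaration being checked for, assuming the squid-era mgr_module API (importable only inside the ceph-mgr Python runtime; the module itself is hypothetical):

    from typing import List
    from mgr_module import MgrModule, NotifyType  # available inside ceph-mgr only

    class ExampleModule(MgrModule):
        # Declaring NOTIFY_TYPES tells the mgr which notifications to deliver
        # and avoids the "missing NOTIFY_TYPES member" startup warning.
        NOTIFY_TYPES: List[NotifyType] = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.debug("notified: %s %s", notify_type, notify_id)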
Jan 26 09:38:30 compute-0 podman[74776]: 2026-01-26 09:38:30.734511132 +0000 UTC m=+0.066263109 container create 14060b15ef7032fddcf313734d72fc2cf04c39b74710f951b537e116cb73c43f (image=quay.io/ceph/ceph:v19, name=determined_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:30 compute-0 ceph-mgr[74755]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 09:38:30 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'cephadm'
Jan 26 09:38:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:30.768+0000 7ffb4f780140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 09:38:30 compute-0 podman[74776]: 2026-01-26 09:38:30.706833097 +0000 UTC m=+0.038585094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:30 compute-0 systemd[1]: Started libpod-conmon-14060b15ef7032fddcf313734d72fc2cf04c39b74710f951b537e116cb73c43f.scope.
Jan 26 09:38:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb028ae08ee387cd749e105c6e75bfae6255fbd784eddbb0ce59318406ff1b60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb028ae08ee387cd749e105c6e75bfae6255fbd784eddbb0ce59318406ff1b60/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb028ae08ee387cd749e105c6e75bfae6255fbd784eddbb0ce59318406ff1b60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:31 compute-0 podman[74776]: 2026-01-26 09:38:31.074954051 +0000 UTC m=+0.406706078 container init 14060b15ef7032fddcf313734d72fc2cf04c39b74710f951b537e116cb73c43f (image=quay.io/ceph/ceph:v19, name=determined_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 26 09:38:31 compute-0 podman[74776]: 2026-01-26 09:38:31.087642577 +0000 UTC m=+0.419394534 container start 14060b15ef7032fddcf313734d72fc2cf04c39b74710f951b537e116cb73c43f (image=quay.io/ceph/ceph:v19, name=determined_gould, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 26 09:38:31 compute-0 podman[74776]: 2026-01-26 09:38:31.096867758 +0000 UTC m=+0.428619715 container attach 14060b15ef7032fddcf313734d72fc2cf04c39b74710f951b537e116cb73c43f (image=quay.io/ceph/ceph:v19, name=determined_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:38:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 26 09:38:31 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3668349834' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 09:38:31 compute-0 determined_gould[74793]: 
Jan 26 09:38:31 compute-0 determined_gould[74793]: {
Jan 26 09:38:31 compute-0 determined_gould[74793]:     "fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:38:31 compute-0 determined_gould[74793]:     "health": {
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "status": "HEALTH_OK",
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "checks": {},
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "mutes": []
Jan 26 09:38:31 compute-0 determined_gould[74793]:     },
Jan 26 09:38:31 compute-0 determined_gould[74793]:     "election_epoch": 5,
Jan 26 09:38:31 compute-0 determined_gould[74793]:     "quorum": [
Jan 26 09:38:31 compute-0 determined_gould[74793]:         0
Jan 26 09:38:31 compute-0 determined_gould[74793]:     ],
Jan 26 09:38:31 compute-0 determined_gould[74793]:     "quorum_names": [
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "compute-0"
Jan 26 09:38:31 compute-0 determined_gould[74793]:     ],
Jan 26 09:38:31 compute-0 determined_gould[74793]:     "quorum_age": 4,
Jan 26 09:38:31 compute-0 determined_gould[74793]:     "monmap": {
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "epoch": 1,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "min_mon_release_name": "squid",
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "num_mons": 1
Jan 26 09:38:31 compute-0 determined_gould[74793]:     },
Jan 26 09:38:31 compute-0 determined_gould[74793]:     "osdmap": {
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "epoch": 1,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "num_osds": 0,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "num_up_osds": 0,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "osd_up_since": 0,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "num_in_osds": 0,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "osd_in_since": 0,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "num_remapped_pgs": 0
Jan 26 09:38:31 compute-0 determined_gould[74793]:     },
Jan 26 09:38:31 compute-0 determined_gould[74793]:     "pgmap": {
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "pgs_by_state": [],
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "num_pgs": 0,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "num_pools": 0,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "num_objects": 0,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "data_bytes": 0,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "bytes_used": 0,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "bytes_avail": 0,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "bytes_total": 0
Jan 26 09:38:31 compute-0 determined_gould[74793]:     },
Jan 26 09:38:31 compute-0 determined_gould[74793]:     "fsmap": {
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "epoch": 1,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "btime": "2026-01-26T09:38:21.975599+0000",
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "by_rank": [],
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "up:standby": 0
Jan 26 09:38:31 compute-0 determined_gould[74793]:     },
Jan 26 09:38:31 compute-0 determined_gould[74793]:     "mgrmap": {
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "available": false,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "num_standbys": 0,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "modules": [
Jan 26 09:38:31 compute-0 determined_gould[74793]:             "iostat",
Jan 26 09:38:31 compute-0 determined_gould[74793]:             "nfs",
Jan 26 09:38:31 compute-0 determined_gould[74793]:             "restful"
Jan 26 09:38:31 compute-0 determined_gould[74793]:         ],
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "services": {}
Jan 26 09:38:31 compute-0 determined_gould[74793]:     },
Jan 26 09:38:31 compute-0 determined_gould[74793]:     "servicemap": {
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "epoch": 1,
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "modified": "2026-01-26T09:38:22.027389+0000",
Jan 26 09:38:31 compute-0 determined_gould[74793]:         "services": {}
Jan 26 09:38:31 compute-0 determined_gould[74793]:     },
Jan 26 09:38:31 compute-0 determined_gould[74793]:     "progress_events": {}
Jan 26 09:38:31 compute-0 determined_gould[74793]: }
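The JSON block above is what the dispatched {"prefix": "status", "format": "json-pretty"} command returned, printed by the one-shot container determined_gould. A minimal sketch for consuming such a dump, assuming it has been captured to status.json (a hypothetical file; every field name is taken from the output above):

    import json

    with open("status.json") as f:  # hypothetical capture of the dump above
        s = json.load(f)

    assert s["health"]["status"] == "HEALTH_OK"
    print(f'fsid={s["fsid"]}')
    print(f'mons={s["monmap"]["num_mons"]} quorum={s["quorum_names"]}')
    print(f'osds={s["osdmap"]["num_osds"]} up={s["osdmap"]["num_up_osds"]}')
    print(f'mgr available={s["mgrmap"]["available"]}')  # false here: the mgr is still loading modules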
Jan 26 09:38:31 compute-0 systemd[1]: libpod-14060b15ef7032fddcf313734d72fc2cf04c39b74710f951b537e116cb73c43f.scope: Deactivated successfully.
Jan 26 09:38:31 compute-0 podman[74776]: 2026-01-26 09:38:31.281823484 +0000 UTC m=+0.613575421 container died 14060b15ef7032fddcf313734d72fc2cf04c39b74710f951b537e116cb73c43f (image=quay.io/ceph/ceph:v19, name=determined_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 26 09:38:31 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'crash'
Jan 26 09:38:31 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3668349834' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 09:38:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb028ae08ee387cd749e105c6e75bfae6255fbd784eddbb0ce59318406ff1b60-merged.mount: Deactivated successfully.
Jan 26 09:38:31 compute-0 ceph-mgr[74755]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 09:38:31 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'dashboard'
Jan 26 09:38:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:31.571+0000 7ffb4f780140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 09:38:31 compute-0 podman[74776]: 2026-01-26 09:38:31.690316079 +0000 UTC m=+1.022068016 container remove 14060b15ef7032fddcf313734d72fc2cf04c39b74710f951b537e116cb73c43f (image=quay.io/ceph/ceph:v19, name=determined_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 09:38:31 compute-0 systemd[1]: libpod-conmon-14060b15ef7032fddcf313734d72fc2cf04c39b74710f951b537e116cb73c43f.scope: Deactivated successfully.
Jan 26 09:38:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'devicehealth'
Jan 26 09:38:32 compute-0 ceph-mgr[74755]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 09:38:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'diskprediction_local'
Jan 26 09:38:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:32.180+0000 7ffb4f780140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 09:38:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 26 09:38:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 26 09:38:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   from numpy import show_config as show_numpy_config
Jan 26 09:38:32 compute-0 ceph-mgr[74755]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 09:38:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:32.335+0000 7ffb4f780140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 09:38:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'influx'
Jan 26 09:38:32 compute-0 ceph-mgr[74755]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 09:38:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'insights'
Jan 26 09:38:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:32.408+0000 7ffb4f780140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 09:38:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'iostat'
Jan 26 09:38:32 compute-0 ceph-mgr[74755]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 09:38:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'k8sevents'
Jan 26 09:38:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:32.546+0000 7ffb4f780140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 09:38:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'localpool'
Jan 26 09:38:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'mds_autoscaler'
Jan 26 09:38:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'mirroring'
Jan 26 09:38:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'nfs'
Jan 26 09:38:33 compute-0 ceph-mgr[74755]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 09:38:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'orchestrator'
Jan 26 09:38:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:33.520+0000 7ffb4f780140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 09:38:33 compute-0 ceph-mgr[74755]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 09:38:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'osd_perf_query'
Jan 26 09:38:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:33.723+0000 7ffb4f780140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 09:38:33 compute-0 podman[74842]: 2026-01-26 09:38:33.770223677 +0000 UTC m=+0.053260184 container create d7f8ae5905088618212b988e79d400958a1f8e64890df498a4c20ffd4d1d7756 (image=quay.io/ceph/ceph:v19, name=wonderful_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:33 compute-0 ceph-mgr[74755]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 09:38:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'osd_support'
Jan 26 09:38:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:33.808+0000 7ffb4f780140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 09:38:33 compute-0 systemd[1]: Started libpod-conmon-d7f8ae5905088618212b988e79d400958a1f8e64890df498a4c20ffd4d1d7756.scope.
Jan 26 09:38:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f659b12becf01e079882c2ace63e85467dea4873dd95267dca88aa104f2ec17/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f659b12becf01e079882c2ace63e85467dea4873dd95267dca88aa104f2ec17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f659b12becf01e079882c2ace63e85467dea4873dd95267dca88aa104f2ec17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:33 compute-0 podman[74842]: 2026-01-26 09:38:33.741124173 +0000 UTC m=+0.024160680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:33 compute-0 podman[74842]: 2026-01-26 09:38:33.854348552 +0000 UTC m=+0.137385049 container init d7f8ae5905088618212b988e79d400958a1f8e64890df498a4c20ffd4d1d7756 (image=quay.io/ceph/ceph:v19, name=wonderful_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:33 compute-0 podman[74842]: 2026-01-26 09:38:33.860246453 +0000 UTC m=+0.143282930 container start d7f8ae5905088618212b988e79d400958a1f8e64890df498a4c20ffd4d1d7756 (image=quay.io/ceph/ceph:v19, name=wonderful_napier, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:38:33 compute-0 podman[74842]: 2026-01-26 09:38:33.875940951 +0000 UTC m=+0.158977448 container attach d7f8ae5905088618212b988e79d400958a1f8e64890df498a4c20ffd4d1d7756 (image=quay.io/ceph/ceph:v19, name=wonderful_napier, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:38:33 compute-0 ceph-mgr[74755]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 09:38:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'pg_autoscaler'
Jan 26 09:38:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:33.877+0000 7ffb4f780140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 09:38:33 compute-0 ceph-mgr[74755]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 09:38:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'progress'
Jan 26 09:38:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:33.963+0000 7ffb4f780140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 09:38:34 compute-0 ceph-mgr[74755]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 09:38:34 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'prometheus'
Jan 26 09:38:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:34.035+0000 7ffb4f780140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 09:38:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 26 09:38:34 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664884832' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 09:38:34 compute-0 wonderful_napier[74859]: 
Jan 26 09:38:34 compute-0 wonderful_napier[74859]: {
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     "fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     "health": {
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "status": "HEALTH_OK",
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "checks": {},
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "mutes": []
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     },
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     "election_epoch": 5,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     "quorum": [
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         0
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     ],
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     "quorum_names": [
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "compute-0"
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     ],
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     "quorum_age": 6,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     "monmap": {
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "epoch": 1,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "min_mon_release_name": "squid",
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "num_mons": 1
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     },
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     "osdmap": {
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "epoch": 1,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "num_osds": 0,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "num_up_osds": 0,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "osd_up_since": 0,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "num_in_osds": 0,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "osd_in_since": 0,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "num_remapped_pgs": 0
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     },
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     "pgmap": {
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "pgs_by_state": [],
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "num_pgs": 0,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "num_pools": 0,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "num_objects": 0,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "data_bytes": 0,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "bytes_used": 0,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "bytes_avail": 0,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "bytes_total": 0
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     },
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     "fsmap": {
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "epoch": 1,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "btime": "2026-01-26T09:38:21.975599+0000",
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "by_rank": [],
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "up:standby": 0
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     },
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     "mgrmap": {
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "available": false,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "num_standbys": 0,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "modules": [
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:             "iostat",
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:             "nfs",
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:             "restful"
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         ],
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "services": {}
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     },
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     "servicemap": {
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "epoch": 1,
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "modified": "2026-01-26T09:38:22.027389+0000",
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:         "services": {}
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     },
Jan 26 09:38:34 compute-0 wonderful_napier[74859]:     "progress_events": {}
Jan 26 09:38:34 compute-0 wonderful_napier[74859]: }
Jan 26 09:38:34 compute-0 systemd[1]: libpod-d7f8ae5905088618212b988e79d400958a1f8e64890df498a4c20ffd4d1d7756.scope: Deactivated successfully.
Jan 26 09:38:34 compute-0 podman[74842]: 2026-01-26 09:38:34.062005338 +0000 UTC m=+0.345041845 container died d7f8ae5905088618212b988e79d400958a1f8e64890df498a4c20ffd4d1d7756 (image=quay.io/ceph/ceph:v19, name=wonderful_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f659b12becf01e079882c2ace63e85467dea4873dd95267dca88aa104f2ec17-merged.mount: Deactivated successfully.
Jan 26 09:38:34 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/664884832' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 09:38:34 compute-0 podman[74842]: 2026-01-26 09:38:34.107459578 +0000 UTC m=+0.390496085 container remove d7f8ae5905088618212b988e79d400958a1f8e64890df498a4c20ffd4d1d7756 (image=quay.io/ceph/ceph:v19, name=wonderful_napier, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:38:34 compute-0 systemd[1]: libpod-conmon-d7f8ae5905088618212b988e79d400958a1f8e64890df498a4c20ffd4d1d7756.scope: Deactivated successfully.
Jan 26 09:38:34 compute-0 ceph-mgr[74755]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 09:38:34 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rbd_support'
Jan 26 09:38:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:34.381+0000 7ffb4f780140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 09:38:34 compute-0 ceph-mgr[74755]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 09:38:34 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'restful'
Jan 26 09:38:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:34.476+0000 7ffb4f780140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 09:38:34 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rgw'
Jan 26 09:38:34 compute-0 ceph-mgr[74755]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 09:38:34 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rook'
Jan 26 09:38:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:34.895+0000 7ffb4f780140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 09:38:35 compute-0 ceph-mgr[74755]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 09:38:35 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'selftest'
Jan 26 09:38:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:35.439+0000 7ffb4f780140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 09:38:35 compute-0 ceph-mgr[74755]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 09:38:35 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'snap_schedule'
Jan 26 09:38:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:35.531+0000 7ffb4f780140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 09:38:35 compute-0 ceph-mgr[74755]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 09:38:35 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'stats'
Jan 26 09:38:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:35.611+0000 7ffb4f780140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 09:38:35 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'status'
Jan 26 09:38:35 compute-0 ceph-mgr[74755]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 09:38:35 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'telegraf'
Jan 26 09:38:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:35.755+0000 7ffb4f780140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 09:38:35 compute-0 ceph-mgr[74755]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 09:38:35 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'telemetry'
Jan 26 09:38:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:35.823+0000 7ffb4f780140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 09:38:35 compute-0 ceph-mgr[74755]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 09:38:35 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'test_orchestrator'
Jan 26 09:38:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:35.974+0000 7ffb4f780140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 09:38:36 compute-0 podman[74896]: 2026-01-26 09:38:36.163131234 +0000 UTC m=+0.034801020 container create a6b9efdcc8231f28ad35fa8242c6bc0958efcce805e3523f91eb6ebea933bd52 (image=quay.io/ceph/ceph:v19, name=distracted_snyder, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:36 compute-0 systemd[1]: Started libpod-conmon-a6b9efdcc8231f28ad35fa8242c6bc0958efcce805e3523f91eb6ebea933bd52.scope.
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'volumes'
Jan 26 09:38:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:36.194+0000 7ffb4f780140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 09:38:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5072a0ac30cf1f7778b927f419f7a34c05ded6e286b6b4150043f38b57644bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5072a0ac30cf1f7778b927f419f7a34c05ded6e286b6b4150043f38b57644bc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5072a0ac30cf1f7778b927f419f7a34c05ded6e286b6b4150043f38b57644bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:36 compute-0 podman[74896]: 2026-01-26 09:38:36.24254303 +0000 UTC m=+0.114212826 container init a6b9efdcc8231f28ad35fa8242c6bc0958efcce805e3523f91eb6ebea933bd52 (image=quay.io/ceph/ceph:v19, name=distracted_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Jan 26 09:38:36 compute-0 podman[74896]: 2026-01-26 09:38:36.147276921 +0000 UTC m=+0.018946727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:36 compute-0 podman[74896]: 2026-01-26 09:38:36.248208015 +0000 UTC m=+0.119877791 container start a6b9efdcc8231f28ad35fa8242c6bc0958efcce805e3523f91eb6ebea933bd52 (image=quay.io/ceph/ceph:v19, name=distracted_snyder, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 26 09:38:36 compute-0 podman[74896]: 2026-01-26 09:38:36.266221796 +0000 UTC m=+0.137891602 container attach a6b9efdcc8231f28ad35fa8242c6bc0958efcce805e3523f91eb6ebea933bd52 (image=quay.io/ceph/ceph:v19, name=distracted_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 26 09:38:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3273145474' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 09:38:36 compute-0 distracted_snyder[74913]: 
Jan 26 09:38:36 compute-0 distracted_snyder[74913]: {
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     "fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     "health": {
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "status": "HEALTH_OK",
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "checks": {},
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "mutes": []
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     },
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     "election_epoch": 5,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     "quorum": [
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         0
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     ],
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     "quorum_names": [
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "compute-0"
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     ],
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     "quorum_age": 9,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     "monmap": {
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "epoch": 1,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "min_mon_release_name": "squid",
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "num_mons": 1
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     },
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     "osdmap": {
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "epoch": 1,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "num_osds": 0,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "num_up_osds": 0,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "osd_up_since": 0,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "num_in_osds": 0,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "osd_in_since": 0,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "num_remapped_pgs": 0
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     },
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     "pgmap": {
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "pgs_by_state": [],
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "num_pgs": 0,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "num_pools": 0,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "num_objects": 0,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "data_bytes": 0,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "bytes_used": 0,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "bytes_avail": 0,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "bytes_total": 0
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     },
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     "fsmap": {
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "epoch": 1,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "btime": "2026-01-26T09:38:21:975599+0000",
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "by_rank": [],
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "up:standby": 0
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     },
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     "mgrmap": {
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "available": false,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "num_standbys": 0,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "modules": [
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:             "iostat",
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:             "nfs",
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:             "restful"
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         ],
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "services": {}
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     },
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     "servicemap": {
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "epoch": 1,
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "modified": "2026-01-26T09:38:22.027389+0000",
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:         "services": {}
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     },
Jan 26 09:38:36 compute-0 distracted_snyder[74913]:     "progress_events": {}
Jan 26 09:38:36 compute-0 distracted_snyder[74913]: }
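
The one-shot container above is how the bootstrap polls the cluster: it runs `ceph status --format json-pretty` inside a throwaway quay.io/ceph/ceph:v19 container and inspects the JSON, which here still reports "available": false while the mgr activates. A minimal, hypothetical Python sketch of that kind of readiness check follows; the field names are taken from the output above, everything else (the sample string, the prints) is illustrative:

    import json

    # Trimmed sample of the status document printed above; in a real check
    # this string would be the stdout of `ceph status --format json`.
    status_json = '''
    {
        "fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
        "health": {"status": "HEALTH_OK", "checks": {}, "mutes": []},
        "quorum_names": ["compute-0"],
        "osdmap": {"num_osds": 0, "num_up_osds": 0, "num_in_osds": 0},
        "mgrmap": {"available": false, "num_standbys": 0}
    }
    '''

    status = json.loads(status_json)
    print("health:", status["health"]["status"])
    print("quorum:", ",".join(status["quorum_names"]))
    osd = status["osdmap"]
    print(f"osds: {osd['num_up_osds']}/{osd['num_osds']} up, {osd['num_in_osds']} in")
    if not status["mgrmap"]["available"]:
        print("mgr not available yet; bootstrap keeps polling")
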
Jan 26 09:38:36 compute-0 systemd[1]: libpod-a6b9efdcc8231f28ad35fa8242c6bc0958efcce805e3523f91eb6ebea933bd52.scope: Deactivated successfully.
Jan 26 09:38:36 compute-0 podman[74896]: 2026-01-26 09:38:36.439303419 +0000 UTC m=+0.310973285 container died a6b9efdcc8231f28ad35fa8242c6bc0958efcce805e3523f91eb6ebea933bd52 (image=quay.io/ceph/ceph:v19, name=distracted_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 09:38:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:36.464+0000 7ffb4f780140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'zabbix'
Jan 26 09:38:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5072a0ac30cf1f7778b927f419f7a34c05ded6e286b6b4150043f38b57644bc-merged.mount: Deactivated successfully.
Jan 26 09:38:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3273145474' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 09:38:36 compute-0 podman[74896]: 2026-01-26 09:38:36.512312851 +0000 UTC m=+0.383982677 container remove a6b9efdcc8231f28ad35fa8242c6bc0958efcce805e3523f91eb6ebea933bd52 (image=quay.io/ceph/ceph:v19, name=distracted_snyder, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:38:36 compute-0 systemd[1]: libpod-conmon-a6b9efdcc8231f28ad35fa8242c6bc0958efcce805e3523f91eb6ebea933bd52.scope: Deactivated successfully.
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 09:38:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:36.534+0000 7ffb4f780140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
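
The run of `-1 mgr[py] Module <name> has missing NOTIFY_TYPES member` lines that ends here is noisy but benign: each module still loads, and the squid module loader is only warning that the module never declared which cluster-map notifications its notify() hook consumes. For illustration only, a module would silence the warning roughly as below; `mgr_module` is importable only inside the ceph-mgr runtime, so this sketch is not runnable standalone and the API details are from memory, not from this log:

    # Hypothetical mgr module sketch -- only loadable inside ceph-mgr, where
    # the mgr_module bindings exist; shown to explain the warning above.
    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        # Declaring NOTIFY_TYPES is what the loader checks for; it doubles
        # as the subscription list for notify() callbacks.
        NOTIFY_TYPES = [NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            if notify_type == NotifyType.osd_map:
                self.log.info("osdmap changed: %s", notify_id)
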
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: ms_deliver_dispatch: unhandled message 0x556030b489c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zllcia
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr handle_mgr_map Activating!
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr handle_mgr_map I am now activating
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.zllcia(active, starting, since 0.0121416s)
Jan 26 09:38:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 26 09:38:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e1 all = 1
Jan 26 09:38:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 09:38:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 26 09:38:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:38:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"} v 0)
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"}]: dispatch
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: balancer
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [balancer INFO root] Starting
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: crash
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Manager daemon compute-0.zllcia is now available
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: devicehealth
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [devicehealth INFO root] Starting
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:38:36
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [balancer INFO root] No pools available
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: iostat
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: nfs
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: orchestrator
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: pg_autoscaler
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: progress
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [progress INFO root] Loading...
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [progress INFO root] No stored events to load
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [progress INFO root] Loaded [] historic events
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [progress INFO root] Loaded OSDMap, ready.
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] recovery thread starting
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] starting setup
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: rbd_support
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: restful
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: status
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"} v 0)
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"}]: dispatch
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: telemetry
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [restful INFO root] server_addr: :: server_port: 8003
Jan 26 09:38:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [restful WARNING root] server not running: no certificate configured
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] PerfHandler: starting
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TaskHandler: starting
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"} v 0)
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"}]: dispatch
Jan 26 09:38:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] setup complete
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 26 09:38:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: volumes
Jan 26 09:38:36 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:37 compute-0 ceph-mon[74456]: Activating manager daemon compute-0.zllcia
Jan 26 09:38:37 compute-0 ceph-mon[74456]: mgrmap e2: compute-0.zllcia(active, starting, since 0.0121416s)
Jan 26 09:38:37 compute-0 ceph-mon[74456]: from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 26 09:38:37 compute-0 ceph-mon[74456]: from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 09:38:37 compute-0 ceph-mon[74456]: from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 26 09:38:37 compute-0 ceph-mon[74456]: from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:38:37 compute-0 ceph-mon[74456]: from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"}]: dispatch
Jan 26 09:38:37 compute-0 ceph-mon[74456]: Manager daemon compute-0.zllcia is now available
Jan 26 09:38:37 compute-0 ceph-mon[74456]: from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"}]: dispatch
Jan 26 09:38:37 compute-0 ceph-mon[74456]: from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:37 compute-0 ceph-mon[74456]: from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"}]: dispatch
Jan 26 09:38:37 compute-0 ceph-mon[74456]: from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:37 compute-0 ceph-mon[74456]: from='mgr.14102 192.168.122.100:0/1542211907' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:37 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.zllcia(active, since 1.0277s)
Jan 26 09:38:38 compute-0 ceph-mgr[74755]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 09:38:38 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.zllcia(active, since 2s)
Jan 26 09:38:38 compute-0 podman[75033]: 2026-01-26 09:38:38.571378359 +0000 UTC m=+0.037102113 container create 9e29a8baa8f61542f0827e6efcf52a57c6578b8a51654e5e4682e09e6e36656a (image=quay.io/ceph/ceph:v19, name=silly_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:38:38 compute-0 ceph-mon[74456]: mgrmap e3: compute-0.zllcia(active, since 1.0277s)
Jan 26 09:38:38 compute-0 systemd[1]: Started libpod-conmon-9e29a8baa8f61542f0827e6efcf52a57c6578b8a51654e5e4682e09e6e36656a.scope.
Jan 26 09:38:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e21c7d20b3ef7fdb4d07da1aa41654ca9bbafb415204bde5746a81b0541b632/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e21c7d20b3ef7fdb4d07da1aa41654ca9bbafb415204bde5746a81b0541b632/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e21c7d20b3ef7fdb4d07da1aa41654ca9bbafb415204bde5746a81b0541b632/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:38 compute-0 podman[75033]: 2026-01-26 09:38:38.554783236 +0000 UTC m=+0.020507000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:38 compute-0 podman[75033]: 2026-01-26 09:38:38.665808915 +0000 UTC m=+0.131532639 container init 9e29a8baa8f61542f0827e6efcf52a57c6578b8a51654e5e4682e09e6e36656a (image=quay.io/ceph/ceph:v19, name=silly_lovelace, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 26 09:38:38 compute-0 podman[75033]: 2026-01-26 09:38:38.671240104 +0000 UTC m=+0.136963818 container start 9e29a8baa8f61542f0827e6efcf52a57c6578b8a51654e5e4682e09e6e36656a (image=quay.io/ceph/ceph:v19, name=silly_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:38 compute-0 podman[75033]: 2026-01-26 09:38:38.674701668 +0000 UTC m=+0.140425402 container attach 9e29a8baa8f61542f0827e6efcf52a57c6578b8a51654e5e4682e09e6e36656a (image=quay.io/ceph/ceph:v19, name=silly_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 26 09:38:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2874826030' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 09:38:39 compute-0 silly_lovelace[75049]: 
Jan 26 09:38:39 compute-0 silly_lovelace[75049]: {
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     "fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     "health": {
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "status": "HEALTH_OK",
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "checks": {},
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "mutes": []
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     },
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     "election_epoch": 5,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     "quorum": [
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         0
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     ],
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     "quorum_names": [
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "compute-0"
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     ],
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     "quorum_age": 11,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     "monmap": {
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "epoch": 1,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "min_mon_release_name": "squid",
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "num_mons": 1
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     },
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     "osdmap": {
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "epoch": 1,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "num_osds": 0,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "num_up_osds": 0,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "osd_up_since": 0,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "num_in_osds": 0,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "osd_in_since": 0,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "num_remapped_pgs": 0
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     },
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     "pgmap": {
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "pgs_by_state": [],
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "num_pgs": 0,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "num_pools": 0,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "num_objects": 0,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "data_bytes": 0,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "bytes_used": 0,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "bytes_avail": 0,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "bytes_total": 0
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     },
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     "fsmap": {
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "epoch": 1,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "btime": "2026-01-26T09:38:21:975599+0000",
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "by_rank": [],
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "up:standby": 0
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     },
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     "mgrmap": {
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "available": true,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "num_standbys": 0,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "modules": [
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:             "iostat",
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:             "nfs",
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:             "restful"
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         ],
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "services": {}
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     },
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     "servicemap": {
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "epoch": 1,
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "modified": "2026-01-26T09:38:22.027389+0000",
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:         "services": {}
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     },
Jan 26 09:38:39 compute-0 silly_lovelace[75049]:     "progress_events": {}
Jan 26 09:38:39 compute-0 silly_lovelace[75049]: }
Jan 26 09:38:39 compute-0 systemd[1]: libpod-9e29a8baa8f61542f0827e6efcf52a57c6578b8a51654e5e4682e09e6e36656a.scope: Deactivated successfully.
Jan 26 09:38:39 compute-0 podman[75033]: 2026-01-26 09:38:39.152910565 +0000 UTC m=+0.618634279 container died 9e29a8baa8f61542f0827e6efcf52a57c6578b8a51654e5e4682e09e6e36656a (image=quay.io/ceph/ceph:v19, name=silly_lovelace, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e21c7d20b3ef7fdb4d07da1aa41654ca9bbafb415204bde5746a81b0541b632-merged.mount: Deactivated successfully.
Jan 26 09:38:39 compute-0 podman[75033]: 2026-01-26 09:38:39.286982504 +0000 UTC m=+0.752706248 container remove 9e29a8baa8f61542f0827e6efcf52a57c6578b8a51654e5e4682e09e6e36656a (image=quay.io/ceph/ceph:v19, name=silly_lovelace, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 26 09:38:39 compute-0 systemd[1]: libpod-conmon-9e29a8baa8f61542f0827e6efcf52a57c6578b8a51654e5e4682e09e6e36656a.scope: Deactivated successfully.
Jan 26 09:38:39 compute-0 podman[75088]: 2026-01-26 09:38:39.339942279 +0000 UTC m=+0.033510066 container create 0e45bd959c8f3c933aca81006d00cf4458a0bb62ba3643f81099b4b6c1924b6a (image=quay.io/ceph/ceph:v19, name=flamboyant_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 09:38:39 compute-0 systemd[1]: Started libpod-conmon-0e45bd959c8f3c933aca81006d00cf4458a0bb62ba3643f81099b4b6c1924b6a.scope.
Jan 26 09:38:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0be143e0c6b9e5c708da88f31ac14b2a56aea6e11e37eceb6b497155bb0bf8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0be143e0c6b9e5c708da88f31ac14b2a56aea6e11e37eceb6b497155bb0bf8f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0be143e0c6b9e5c708da88f31ac14b2a56aea6e11e37eceb6b497155bb0bf8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0be143e0c6b9e5c708da88f31ac14b2a56aea6e11e37eceb6b497155bb0bf8f/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:39 compute-0 podman[75088]: 2026-01-26 09:38:39.413182467 +0000 UTC m=+0.106750264 container init 0e45bd959c8f3c933aca81006d00cf4458a0bb62ba3643f81099b4b6c1924b6a (image=quay.io/ceph/ceph:v19, name=flamboyant_wiles, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 09:38:39 compute-0 podman[75088]: 2026-01-26 09:38:39.324623321 +0000 UTC m=+0.018191138 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:39 compute-0 podman[75088]: 2026-01-26 09:38:39.425935834 +0000 UTC m=+0.119503631 container start 0e45bd959c8f3c933aca81006d00cf4458a0bb62ba3643f81099b4b6c1924b6a (image=quay.io/ceph/ceph:v19, name=flamboyant_wiles, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:38:39 compute-0 podman[75088]: 2026-01-26 09:38:39.43348396 +0000 UTC m=+0.127051757 container attach 0e45bd959c8f3c933aca81006d00cf4458a0bb62ba3643f81099b4b6c1924b6a (image=quay.io/ceph/ceph:v19, name=flamboyant_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 09:38:39 compute-0 ceph-mon[74456]: mgrmap e4: compute-0.zllcia(active, since 2s)
Jan 26 09:38:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2874826030' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 09:38:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 26 09:38:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3089429089' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 26 09:38:39 compute-0 flamboyant_wiles[75105]: 
Jan 26 09:38:39 compute-0 flamboyant_wiles[75105]: [global]
Jan 26 09:38:39 compute-0 flamboyant_wiles[75105]:         fsid = 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:39 compute-0 flamboyant_wiles[75105]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
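
The `[global]` fragment above is the return value of `config assimilate-conf`: the command absorbs every option it can into the monitors' central config store and prints back only what must stay in a local ceph.conf, which at this point is just fsid and mon_host. A hedged sketch of driving the same step from Python on a cephadm host; the `cephadm shell` wrapper and the default path are assumptions modeled on the one-shot containers in this log:

    import subprocess

    def assimilate(conf_path: str = "/etc/ceph/ceph.conf") -> str:
        """Feed a ceph.conf to the mons; return the minimal leftover config.

        Hypothetical helper: assumes a cephadm host where `cephadm shell`
        launches the same kind of one-shot ceph container seen above.
        """
        result = subprocess.run(
            ["cephadm", "shell", "--", "ceph", "config", "assimilate-conf",
             "-i", conf_path],
            check=True, capture_output=True, text=True,
        )
        return result.stdout  # e.g. "[global]\n fsid = ...\n mon_host = ..."

    if __name__ == "__main__":
        print(assimilate())
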
Jan 26 09:38:39 compute-0 systemd[1]: libpod-0e45bd959c8f3c933aca81006d00cf4458a0bb62ba3643f81099b4b6c1924b6a.scope: Deactivated successfully.
Jan 26 09:38:39 compute-0 podman[75088]: 2026-01-26 09:38:39.73880201 +0000 UTC m=+0.432369807 container died 0e45bd959c8f3c933aca81006d00cf4458a0bb62ba3643f81099b4b6c1924b6a (image=quay.io/ceph/ceph:v19, name=flamboyant_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 09:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0be143e0c6b9e5c708da88f31ac14b2a56aea6e11e37eceb6b497155bb0bf8f-merged.mount: Deactivated successfully.
Jan 26 09:38:39 compute-0 podman[75088]: 2026-01-26 09:38:39.769888459 +0000 UTC m=+0.463456256 container remove 0e45bd959c8f3c933aca81006d00cf4458a0bb62ba3643f81099b4b6c1924b6a (image=quay.io/ceph/ceph:v19, name=flamboyant_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:38:39 compute-0 systemd[1]: libpod-conmon-0e45bd959c8f3c933aca81006d00cf4458a0bb62ba3643f81099b4b6c1924b6a.scope: Deactivated successfully.
Jan 26 09:38:39 compute-0 podman[75145]: 2026-01-26 09:38:39.863405319 +0000 UTC m=+0.072814737 container create 71470dfd4ee43d3663e9dff820e8043f1bfa935bfebd2638a49a731ed617574e (image=quay.io/ceph/ceph:v19, name=distracted_euler, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 09:38:39 compute-0 systemd[1]: Started libpod-conmon-71470dfd4ee43d3663e9dff820e8043f1bfa935bfebd2638a49a731ed617574e.scope.
Jan 26 09:38:39 compute-0 podman[75145]: 2026-01-26 09:38:39.816740827 +0000 UTC m=+0.026150255 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca5a5b2a36a35c96902906f84d36b954fcd00a89b29f59ab33cd4ba315751fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca5a5b2a36a35c96902906f84d36b954fcd00a89b29f59ab33cd4ba315751fc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca5a5b2a36a35c96902906f84d36b954fcd00a89b29f59ab33cd4ba315751fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:39 compute-0 podman[75145]: 2026-01-26 09:38:39.930690176 +0000 UTC m=+0.140099604 container init 71470dfd4ee43d3663e9dff820e8043f1bfa935bfebd2638a49a731ed617574e (image=quay.io/ceph/ceph:v19, name=distracted_euler, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 09:38:39 compute-0 podman[75145]: 2026-01-26 09:38:39.936816493 +0000 UTC m=+0.146225911 container start 71470dfd4ee43d3663e9dff820e8043f1bfa935bfebd2638a49a731ed617574e (image=quay.io/ceph/ceph:v19, name=distracted_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 26 09:38:39 compute-0 podman[75145]: 2026-01-26 09:38:39.939361653 +0000 UTC m=+0.148771071 container attach 71470dfd4ee43d3663e9dff820e8043f1bfa935bfebd2638a49a731ed617574e (image=quay.io/ceph/ceph:v19, name=distracted_euler, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 26 09:38:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 26 09:38:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1670884327' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 09:38:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3089429089' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 26 09:38:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1670884327' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 26 09:38:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1670884327' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn  1: '-n'
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn  2: 'mgr.compute-0.zllcia'
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn  3: '-f'
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn  4: '--setuser'
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn  5: 'ceph'
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn  6: '--setgroup'
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn  7: 'ceph'
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn  8: '--default-log-to-file=false'
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn  9: '--default-log-to-journald=true'
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr respawn  exe_path /proc/self/exe
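
Enabling cephadm changed the set of always-on modules, and the mgr handles that by re-executing itself rather than reloading Python in place: it replays the saved argv logged above and execs /proc/self/exe, so it restarts the exact binary image it is already running (same PID) even if /usr/bin/ceph-mgr has since been replaced on disk. A tiny standalone Linux illustration of that re-exec pattern; the RESPAWNED guard is my addition so the demo stops after one generation:

    import os
    import sys

    # Re-exec this process once via /proc/self/exe with the original argv --
    # the same mechanism the mgr logs above. Linux-only; the PID is kept.
    if os.environ.get("RESPAWNED") != "1":
        os.environ["RESPAWNED"] = "1"          # inherited across execv
        print(f"pid {os.getpid()}: respawning, argv={sys.argv}")
        os.execv("/proc/self/exe", [sys.executable] + sys.argv)

    print(f"pid {os.getpid()}: same PID, fresh process image")
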
Jan 26 09:38:40 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.zllcia(active, since 4s)
Jan 26 09:38:40 compute-0 systemd[1]: libpod-71470dfd4ee43d3663e9dff820e8043f1bfa935bfebd2638a49a731ed617574e.scope: Deactivated successfully.
Jan 26 09:38:40 compute-0 podman[75145]: 2026-01-26 09:38:40.624600628 +0000 UTC m=+0.834010036 container died 71470dfd4ee43d3663e9dff820e8043f1bfa935bfebd2638a49a731ed617574e (image=quay.io/ceph/ceph:v19, name=distracted_euler, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ca5a5b2a36a35c96902906f84d36b954fcd00a89b29f59ab33cd4ba315751fc-merged.mount: Deactivated successfully.
Jan 26 09:38:40 compute-0 podman[75145]: 2026-01-26 09:38:40.672055583 +0000 UTC m=+0.881464991 container remove 71470dfd4ee43d3663e9dff820e8043f1bfa935bfebd2638a49a731ed617574e (image=quay.io/ceph/ceph:v19, name=distracted_euler, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 09:38:40 compute-0 systemd[1]: libpod-conmon-71470dfd4ee43d3663e9dff820e8043f1bfa935bfebd2638a49a731ed617574e.scope: Deactivated successfully.
Jan 26 09:38:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ignoring --setuser ceph since I am not root
Jan 26 09:38:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ignoring --setgroup ceph since I am not root
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: pidfile_write: ignore empty --pid-file
Jan 26 09:38:40 compute-0 podman[75199]: 2026-01-26 09:38:40.728573565 +0000 UTC m=+0.039798307 container create a5545b9c0fe8fff12018da3e5bbb452dd44fdbc2402d79c077440cdbac4a74f1 (image=quay.io/ceph/ceph:v19, name=goofy_galileo, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'alerts'
Jan 26 09:38:40 compute-0 systemd[1]: Started libpod-conmon-a5545b9c0fe8fff12018da3e5bbb452dd44fdbc2402d79c077440cdbac4a74f1.scope.
Jan 26 09:38:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7b56fb724fc3b047e5f1c1a451b39e9263b8e0d1b82ed5d17342678b6710252/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7b56fb724fc3b047e5f1c1a451b39e9263b8e0d1b82ed5d17342678b6710252/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7b56fb724fc3b047e5f1c1a451b39e9263b8e0d1b82ed5d17342678b6710252/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:40 compute-0 podman[75199]: 2026-01-26 09:38:40.710427859 +0000 UTC m=+0.021652631 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:40 compute-0 podman[75199]: 2026-01-26 09:38:40.807058426 +0000 UTC m=+0.118283168 container init a5545b9c0fe8fff12018da3e5bbb452dd44fdbc2402d79c077440cdbac4a74f1 (image=quay.io/ceph/ceph:v19, name=goofy_galileo, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:38:40 compute-0 podman[75199]: 2026-01-26 09:38:40.811835837 +0000 UTC m=+0.123060599 container start a5545b9c0fe8fff12018da3e5bbb452dd44fdbc2402d79c077440cdbac4a74f1 (image=quay.io/ceph/ceph:v19, name=goofy_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:40 compute-0 podman[75199]: 2026-01-26 09:38:40.815756944 +0000 UTC m=+0.126981706 container attach a5545b9c0fe8fff12018da3e5bbb452dd44fdbc2402d79c077440cdbac4a74f1 (image=quay.io/ceph/ceph:v19, name=goofy_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 09:38:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:40.828+0000 7ff5323b5140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'balancer'
Jan 26 09:38:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:40.904+0000 7ff5323b5140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 09:38:40 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'cephadm'
Jan 26 09:38:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 26 09:38:41 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/906952954' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 26 09:38:41 compute-0 goofy_galileo[75236]: {
Jan 26 09:38:41 compute-0 goofy_galileo[75236]:     "epoch": 5,
Jan 26 09:38:41 compute-0 goofy_galileo[75236]:     "available": true,
Jan 26 09:38:41 compute-0 goofy_galileo[75236]:     "active_name": "compute-0.zllcia",
Jan 26 09:38:41 compute-0 goofy_galileo[75236]:     "num_standby": 0
Jan 26 09:38:41 compute-0 goofy_galileo[75236]: }
Jan 26 09:38:41 compute-0 systemd[1]: libpod-a5545b9c0fe8fff12018da3e5bbb452dd44fdbc2402d79c077440cdbac4a74f1.scope: Deactivated successfully.
Jan 26 09:38:41 compute-0 podman[75199]: 2026-01-26 09:38:41.219821518 +0000 UTC m=+0.531046260 container died a5545b9c0fe8fff12018da3e5bbb452dd44fdbc2402d79c077440cdbac4a74f1 (image=quay.io/ceph/ceph:v19, name=goofy_galileo, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 09:38:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7b56fb724fc3b047e5f1c1a451b39e9263b8e0d1b82ed5d17342678b6710252-merged.mount: Deactivated successfully.
Jan 26 09:38:41 compute-0 podman[75199]: 2026-01-26 09:38:41.549109452 +0000 UTC m=+0.860334194 container remove a5545b9c0fe8fff12018da3e5bbb452dd44fdbc2402d79c077440cdbac4a74f1 (image=quay.io/ceph/ceph:v19, name=goofy_galileo, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 09:38:41 compute-0 systemd[1]: libpod-conmon-a5545b9c0fe8fff12018da3e5bbb452dd44fdbc2402d79c077440cdbac4a74f1.scope: Deactivated successfully.
Jan 26 09:38:41 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'crash'
Jan 26 09:38:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1670884327' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 26 09:38:41 compute-0 ceph-mon[74456]: mgrmap e5: compute-0.zllcia(active, since 4s)
Jan 26 09:38:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/906952954' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 26 09:38:41 compute-0 podman[75284]: 2026-01-26 09:38:41.611938946 +0000 UTC m=+0.041853473 container create b7747aab8099014070556fb83b86fd3338d52cab0eb1724f803e00425a846206 (image=quay.io/ceph/ceph:v19, name=relaxed_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:38:41 compute-0 systemd[1]: Started libpod-conmon-b7747aab8099014070556fb83b86fd3338d52cab0eb1724f803e00425a846206.scope.
Jan 26 09:38:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:41.673+0000 7ff5323b5140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 09:38:41 compute-0 ceph-mgr[74755]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 09:38:41 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'dashboard'
Jan 26 09:38:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7d8555eafc4c121c717789b4b87e3803b7500af9177f54e9aec956579c9972b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7d8555eafc4c121c717789b4b87e3803b7500af9177f54e9aec956579c9972b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7d8555eafc4c121c717789b4b87e3803b7500af9177f54e9aec956579c9972b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:41 compute-0 podman[75284]: 2026-01-26 09:38:41.691109977 +0000 UTC m=+0.121024524 container init b7747aab8099014070556fb83b86fd3338d52cab0eb1724f803e00425a846206 (image=quay.io/ceph/ceph:v19, name=relaxed_kalam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 09:38:41 compute-0 podman[75284]: 2026-01-26 09:38:41.5974213 +0000 UTC m=+0.027335847 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:41 compute-0 podman[75284]: 2026-01-26 09:38:41.696288708 +0000 UTC m=+0.126203245 container start b7747aab8099014070556fb83b86fd3338d52cab0eb1724f803e00425a846206 (image=quay.io/ceph/ceph:v19, name=relaxed_kalam, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 09:38:41 compute-0 podman[75284]: 2026-01-26 09:38:41.699801734 +0000 UTC m=+0.129716301 container attach b7747aab8099014070556fb83b86fd3338d52cab0eb1724f803e00425a846206 (image=quay.io/ceph/ceph:v19, name=relaxed_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 09:38:42 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'devicehealth'
Jan 26 09:38:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:42.308+0000 7ff5323b5140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 09:38:42 compute-0 ceph-mgr[74755]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 09:38:42 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'diskprediction_local'
Jan 26 09:38:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 26 09:38:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 26 09:38:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   from numpy import show_config as show_numpy_config
Jan 26 09:38:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:42.471+0000 7ff5323b5140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 09:38:42 compute-0 ceph-mgr[74755]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 09:38:42 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'influx'
Jan 26 09:38:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:42.544+0000 7ff5323b5140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 09:38:42 compute-0 ceph-mgr[74755]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 09:38:42 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'insights'
Jan 26 09:38:42 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'iostat'
Jan 26 09:38:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:42.688+0000 7ff5323b5140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 09:38:42 compute-0 ceph-mgr[74755]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 09:38:42 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'k8sevents'
Jan 26 09:38:43 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'localpool'
Jan 26 09:38:43 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'mds_autoscaler'
Jan 26 09:38:43 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'mirroring'
Jan 26 09:38:43 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'nfs'
Jan 26 09:38:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:43.658+0000 7ff5323b5140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 09:38:43 compute-0 ceph-mgr[74755]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 09:38:43 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'orchestrator'
Jan 26 09:38:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:43.910+0000 7ff5323b5140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 09:38:43 compute-0 ceph-mgr[74755]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 09:38:43 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'osd_perf_query'
Jan 26 09:38:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:43.985+0000 7ff5323b5140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 09:38:43 compute-0 ceph-mgr[74755]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 09:38:43 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'osd_support'
Jan 26 09:38:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:44.052+0000 7ff5323b5140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 09:38:44 compute-0 ceph-mgr[74755]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 09:38:44 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'pg_autoscaler'
Jan 26 09:38:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:44.135+0000 7ff5323b5140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 09:38:44 compute-0 ceph-mgr[74755]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 09:38:44 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'progress'
Jan 26 09:38:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:44.206+0000 7ff5323b5140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 09:38:44 compute-0 ceph-mgr[74755]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 09:38:44 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'prometheus'
Jan 26 09:38:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:44.540+0000 7ff5323b5140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 09:38:44 compute-0 ceph-mgr[74755]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 09:38:44 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rbd_support'
Jan 26 09:38:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:44.636+0000 7ff5323b5140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 09:38:44 compute-0 ceph-mgr[74755]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 09:38:44 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'restful'
Jan 26 09:38:44 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rgw'
Jan 26 09:38:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:45.059+0000 7ff5323b5140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 09:38:45 compute-0 ceph-mgr[74755]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 09:38:45 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rook'
Jan 26 09:38:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:45.593+0000 7ff5323b5140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 09:38:45 compute-0 ceph-mgr[74755]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 09:38:45 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'selftest'
Jan 26 09:38:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:45.669+0000 7ff5323b5140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 09:38:45 compute-0 ceph-mgr[74755]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 09:38:45 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'snap_schedule'
Jan 26 09:38:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:45.746+0000 7ff5323b5140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 09:38:45 compute-0 ceph-mgr[74755]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 09:38:45 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'stats'
Jan 26 09:38:45 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'status'
Jan 26 09:38:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:45.883+0000 7ff5323b5140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 09:38:45 compute-0 ceph-mgr[74755]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 09:38:45 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'telegraf'
Jan 26 09:38:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:45.948+0000 7ff5323b5140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 09:38:45 compute-0 ceph-mgr[74755]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 09:38:45 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'telemetry'
Jan 26 09:38:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:46.094+0000 7ff5323b5140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'test_orchestrator'
Jan 26 09:38:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:46.300+0000 7ff5323b5140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'volumes'
Jan 26 09:38:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:46.545+0000 7ff5323b5140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'zabbix'
Jan 26 09:38:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:38:46.610+0000 7ff5323b5140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Active manager daemon compute-0.zllcia restarted
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zllcia
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: ms_deliver_dispatch: unhandled message 0x55b58b54cd00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr handle_mgr_map Activating!
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.zllcia(active, starting, since 0.0533552s)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr handle_mgr_map I am now activating
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"} v 0)
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e1 all = 1
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: balancer
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Manager daemon compute-0.zllcia is now available
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [balancer INFO root] Starting
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:38:46
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [balancer INFO root] No pools available
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 26 09:38:46 compute-0 ceph-mon[74456]: Active manager daemon compute-0.zllcia restarted
Jan 26 09:38:46 compute-0 ceph-mon[74456]: Activating manager daemon compute-0.zllcia
Jan 26 09:38:46 compute-0 ceph-mon[74456]: osdmap e2: 0 total, 0 up, 0 in
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mgrmap e6: compute-0.zllcia(active, starting, since 0.0533552s)
Jan 26 09:38:46 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mon[74456]: Manager daemon compute-0.zllcia is now available
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: cephadm
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: crash
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: devicehealth
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: iostat
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [devicehealth INFO root] Starting
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: nfs
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: orchestrator
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: pg_autoscaler
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: progress
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [progress INFO root] Loading...
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [progress INFO root] No stored events to load
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [progress INFO root] Loaded [] historic events
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [progress INFO root] Loaded OSDMap, ready.
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] recovery thread starting
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] starting setup
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: rbd_support
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: restful
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"} v 0)
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [restful INFO root] server_addr: :: server_port: 8003
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: status
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [restful WARNING root] server not running: no certificate configured
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: telemetry
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] PerfHandler: starting
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TaskHandler: starting
Jan 26 09:38:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"} v 0)
Jan 26 09:38:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"}]: dispatch
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] setup complete
Jan 26 09:38:46 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: volumes
Jan 26 09:38:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019927080 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:38:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Jan 26 09:38:47 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Jan 26 09:38:47 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:47 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.zllcia(active, since 1.06208s)
Jan 26 09:38:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 26 09:38:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 26 09:38:47 compute-0 relaxed_kalam[75301]: {
Jan 26 09:38:47 compute-0 relaxed_kalam[75301]:     "mgrmap_epoch": 7,
Jan 26 09:38:47 compute-0 relaxed_kalam[75301]:     "initialized": true
Jan 26 09:38:47 compute-0 relaxed_kalam[75301]: }
Jan 26 09:38:47 compute-0 systemd[1]: libpod-b7747aab8099014070556fb83b86fd3338d52cab0eb1724f803e00425a846206.scope: Deactivated successfully.
Jan 26 09:38:47 compute-0 ceph-mon[74456]: Found migration_current of "None". Setting to last migration.
Jan 26 09:38:47 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:47 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:47 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 09:38:47 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 09:38:47 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"}]: dispatch
Jan 26 09:38:47 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"}]: dispatch
Jan 26 09:38:47 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:47 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:47 compute-0 ceph-mon[74456]: mgrmap e7: compute-0.zllcia(active, since 1.06208s)
Jan 26 09:38:47 compute-0 ceph-mon[74456]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 26 09:38:47 compute-0 ceph-mon[74456]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 26 09:38:47 compute-0 podman[75438]: 2026-01-26 09:38:47.733228432 +0000 UTC m=+0.021052297 container died b7747aab8099014070556fb83b86fd3338d52cab0eb1724f803e00425a846206 (image=quay.io/ceph/ceph:v19, name=relaxed_kalam, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 09:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7d8555eafc4c121c717789b4b87e3803b7500af9177f54e9aec956579c9972b-merged.mount: Deactivated successfully.
Jan 26 09:38:47 compute-0 podman[75438]: 2026-01-26 09:38:47.76754388 +0000 UTC m=+0.055367725 container remove b7747aab8099014070556fb83b86fd3338d52cab0eb1724f803e00425a846206 (image=quay.io/ceph/ceph:v19, name=relaxed_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 09:38:47 compute-0 systemd[1]: libpod-conmon-b7747aab8099014070556fb83b86fd3338d52cab0eb1724f803e00425a846206.scope: Deactivated successfully.
Jan 26 09:38:47 compute-0 podman[75450]: 2026-01-26 09:38:47.834068148 +0000 UTC m=+0.041064293 container create 9effeba3849a5924eb7c9c809d8c84dcc9c4d9327eae3609eff7dcadf44a7577 (image=quay.io/ceph/ceph:v19, name=wizardly_keller, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:47 compute-0 systemd[1]: Started libpod-conmon-9effeba3849a5924eb7c9c809d8c84dcc9c4d9327eae3609eff7dcadf44a7577.scope.
Jan 26 09:38:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770e5d39dd3cf4c38650fb15c44e039b64edb6da2595be42ff16536952937c0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770e5d39dd3cf4c38650fb15c44e039b64edb6da2595be42ff16536952937c0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/770e5d39dd3cf4c38650fb15c44e039b64edb6da2595be42ff16536952937c0e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:47 compute-0 podman[75450]: 2026-01-26 09:38:47.814155994 +0000 UTC m=+0.021152169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:48 compute-0 podman[75450]: 2026-01-26 09:38:48.084897904 +0000 UTC m=+0.291894079 container init 9effeba3849a5924eb7c9c809d8c84dcc9c4d9327eae3609eff7dcadf44a7577 (image=quay.io/ceph/ceph:v19, name=wizardly_keller, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 09:38:48 compute-0 podman[75450]: 2026-01-26 09:38:48.090908469 +0000 UTC m=+0.297904614 container start 9effeba3849a5924eb7c9c809d8c84dcc9c4d9327eae3609eff7dcadf44a7577 (image=quay.io/ceph/ceph:v19, name=wizardly_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:38:48 compute-0 podman[75450]: 2026-01-26 09:38:48.105520368 +0000 UTC m=+0.312516523 container attach 9effeba3849a5924eb7c9c809d8c84dcc9c4d9327eae3609eff7dcadf44a7577 (image=quay.io/ceph/ceph:v19, name=wizardly_keller, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:38:48] ENGINE Bus STARTING
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:38:48] ENGINE Bus STARTING
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:38:48] ENGINE Serving on http://192.168.122.100:8765
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:38:48] ENGINE Serving on http://192.168.122.100:8765
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:38:48] ENGINE Serving on https://192.168.122.100:7150
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:38:48] ENGINE Serving on https://192.168.122.100:7150
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:38:48] ENGINE Bus STARTED
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:38:48] ENGINE Bus STARTED
Jan 26 09:38:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 26 09:38:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:38:48] ENGINE Client ('192.168.122.100', 44858) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:38:48] ENGINE Client ('192.168.122.100', 44858) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 26 09:38:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 26 09:38:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 09:38:48 compute-0 systemd[1]: libpod-9effeba3849a5924eb7c9c809d8c84dcc9c4d9327eae3609eff7dcadf44a7577.scope: Deactivated successfully.
Jan 26 09:38:48 compute-0 podman[75450]: 2026-01-26 09:38:48.439472877 +0000 UTC m=+0.646469022 container died 9effeba3849a5924eb7c9c809d8c84dcc9c4d9327eae3609eff7dcadf44a7577 (image=quay.io/ceph/ceph:v19, name=wizardly_keller, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 26 09:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-770e5d39dd3cf4c38650fb15c44e039b64edb6da2595be42ff16536952937c0e-merged.mount: Deactivated successfully.
Jan 26 09:38:48 compute-0 podman[75450]: 2026-01-26 09:38:48.471423169 +0000 UTC m=+0.678419314 container remove 9effeba3849a5924eb7c9c809d8c84dcc9c4d9327eae3609eff7dcadf44a7577 (image=quay.io/ceph/ceph:v19, name=wizardly_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 09:38:48 compute-0 systemd[1]: libpod-conmon-9effeba3849a5924eb7c9c809d8c84dcc9c4d9327eae3609eff7dcadf44a7577.scope: Deactivated successfully.
Jan 26 09:38:48 compute-0 podman[75527]: 2026-01-26 09:38:48.533912798 +0000 UTC m=+0.038414301 container create 392ed7af49ca007e83b09778a407393bdfc3596500bd30b65e03e3444b802c19 (image=quay.io/ceph/ceph:v19, name=angry_lewin, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:48 compute-0 systemd[1]: Started libpod-conmon-392ed7af49ca007e83b09778a407393bdfc3596500bd30b65e03e3444b802c19.scope.
Jan 26 09:38:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3776a9d6b430fc916feef00adf5af75bf0c7fca379a19072c530a895d60ad555/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3776a9d6b430fc916feef00adf5af75bf0c7fca379a19072c530a895d60ad555/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3776a9d6b430fc916feef00adf5af75bf0c7fca379a19072c530a895d60ad555/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:48 compute-0 podman[75527]: 2026-01-26 09:38:48.605534175 +0000 UTC m=+0.110035758 container init 392ed7af49ca007e83b09778a407393bdfc3596500bd30b65e03e3444b802c19 (image=quay.io/ceph/ceph:v19, name=angry_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 26 09:38:48 compute-0 podman[75527]: 2026-01-26 09:38:48.609977786 +0000 UTC m=+0.114479279 container start 392ed7af49ca007e83b09778a407393bdfc3596500bd30b65e03e3444b802c19 (image=quay.io/ceph/ceph:v19, name=angry_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:38:48 compute-0 podman[75527]: 2026-01-26 09:38:48.51463092 +0000 UTC m=+0.019132473 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:48 compute-0 podman[75527]: 2026-01-26 09:38:48.613982086 +0000 UTC m=+0.118483689 container attach 392ed7af49ca007e83b09778a407393bdfc3596500bd30b65e03e3444b802c19 (image=quay.io/ceph/ceph:v19, name=angry_lewin, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 09:38:48 compute-0 ceph-mon[74456]: [26/Jan/2026:09:38:48] ENGINE Bus STARTING
Jan 26 09:38:48 compute-0 ceph-mon[74456]: [26/Jan/2026:09:38:48] ENGINE Serving on http://192.168.122.100:8765
Jan 26 09:38:48 compute-0 ceph-mon[74456]: [26/Jan/2026:09:38:48] ENGINE Serving on https://192.168.122.100:7150
Jan 26 09:38:48 compute-0 ceph-mon[74456]: [26/Jan/2026:09:38:48] ENGINE Bus STARTED
Jan 26 09:38:48 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 09:38:48 compute-0 ceph-mon[74456]: [26/Jan/2026:09:38:48] ENGINE Client ('192.168.122.100', 44858) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 09:38:48 compute-0 ceph-mon[74456]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:48 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:48 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 26 09:38:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: [cephadm INFO root] Set ssh ssh_user
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 26 09:38:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 26 09:38:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: [cephadm INFO root] Set ssh ssh_config
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 26 09:38:48 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 26 09:38:48 compute-0 angry_lewin[75543]: ssh user set to ceph-admin. sudo will be used
Jan 26 09:38:48 compute-0 systemd[1]: libpod-392ed7af49ca007e83b09778a407393bdfc3596500bd30b65e03e3444b802c19.scope: Deactivated successfully.
Jan 26 09:38:48 compute-0 podman[75527]: 2026-01-26 09:38:48.952795797 +0000 UTC m=+0.457297310 container died 392ed7af49ca007e83b09778a407393bdfc3596500bd30b65e03e3444b802c19 (image=quay.io/ceph/ceph:v19, name=angry_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 26 09:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3776a9d6b430fc916feef00adf5af75bf0c7fca379a19072c530a895d60ad555-merged.mount: Deactivated successfully.
Jan 26 09:38:48 compute-0 podman[75527]: 2026-01-26 09:38:48.981842141 +0000 UTC m=+0.486343644 container remove 392ed7af49ca007e83b09778a407393bdfc3596500bd30b65e03e3444b802c19 (image=quay.io/ceph/ceph:v19, name=angry_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:38:48 compute-0 systemd[1]: libpod-conmon-392ed7af49ca007e83b09778a407393bdfc3596500bd30b65e03e3444b802c19.scope: Deactivated successfully.
Jan 26 09:38:49 compute-0 podman[75580]: 2026-01-26 09:38:49.057045157 +0000 UTC m=+0.054556042 container create 0c1d1a2b768b5a1f7e4417fd949e42f87b142169c84fe196e17d3fddcf793265 (image=quay.io/ceph/ceph:v19, name=nifty_cray, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:49 compute-0 systemd[1]: Started libpod-conmon-0c1d1a2b768b5a1f7e4417fd949e42f87b142169c84fe196e17d3fddcf793265.scope.
Jan 26 09:38:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d6d7613909fc75f2bda97e44d452e4c4b5d95c9d9e0f433c425397abdfe19d/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d6d7613909fc75f2bda97e44d452e4c4b5d95c9d9e0f433c425397abdfe19d/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d6d7613909fc75f2bda97e44d452e4c4b5d95c9d9e0f433c425397abdfe19d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d6d7613909fc75f2bda97e44d452e4c4b5d95c9d9e0f433c425397abdfe19d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d6d7613909fc75f2bda97e44d452e4c4b5d95c9d9e0f433c425397abdfe19d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:49 compute-0 podman[75580]: 2026-01-26 09:38:49.02971236 +0000 UTC m=+0.027223285 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:49 compute-0 podman[75580]: 2026-01-26 09:38:49.133290741 +0000 UTC m=+0.130801636 container init 0c1d1a2b768b5a1f7e4417fd949e42f87b142169c84fe196e17d3fddcf793265 (image=quay.io/ceph/ceph:v19, name=nifty_cray, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:38:49 compute-0 podman[75580]: 2026-01-26 09:38:49.151110068 +0000 UTC m=+0.148620933 container start 0c1d1a2b768b5a1f7e4417fd949e42f87b142169c84fe196e17d3fddcf793265 (image=quay.io/ceph/ceph:v19, name=nifty_cray, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 09:38:49 compute-0 podman[75580]: 2026-01-26 09:38:49.155348234 +0000 UTC m=+0.152859109 container attach 0c1d1a2b768b5a1f7e4417fd949e42f87b142169c84fe196e17d3fddcf793265 (image=quay.io/ceph/ceph:v19, name=nifty_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 09:38:49 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.zllcia(active, since 2s)
Jan 26 09:38:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 26 09:38:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:49 compute-0 ceph-mgr[74755]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 26 09:38:49 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 26 09:38:49 compute-0 ceph-mgr[74755]: [cephadm INFO root] Set ssh private key
Jan 26 09:38:49 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 26 09:38:49 compute-0 systemd[1]: libpod-0c1d1a2b768b5a1f7e4417fd949e42f87b142169c84fe196e17d3fddcf793265.scope: Deactivated successfully.
Jan 26 09:38:49 compute-0 podman[75580]: 2026-01-26 09:38:49.52322875 +0000 UTC m=+0.520739595 container died 0c1d1a2b768b5a1f7e4417fd949e42f87b142169c84fe196e17d3fddcf793265 (image=quay.io/ceph/ceph:v19, name=nifty_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-45d6d7613909fc75f2bda97e44d452e4c4b5d95c9d9e0f433c425397abdfe19d-merged.mount: Deactivated successfully.
Jan 26 09:38:49 compute-0 podman[75580]: 2026-01-26 09:38:49.555154132 +0000 UTC m=+0.552664977 container remove 0c1d1a2b768b5a1f7e4417fd949e42f87b142169c84fe196e17d3fddcf793265 (image=quay.io/ceph/ceph:v19, name=nifty_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:38:49 compute-0 systemd[1]: libpod-conmon-0c1d1a2b768b5a1f7e4417fd949e42f87b142169c84fe196e17d3fddcf793265.scope: Deactivated successfully.
Jan 26 09:38:49 compute-0 podman[75636]: 2026-01-26 09:38:49.613310782 +0000 UTC m=+0.039343436 container create 424e7138f0ef0a29f0a3f7c4c8d65b3a982f48d555de740c41d735614449eecc (image=quay.io/ceph/ceph:v19, name=peaceful_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:49 compute-0 systemd[1]: Started libpod-conmon-424e7138f0ef0a29f0a3f7c4c8d65b3a982f48d555de740c41d735614449eecc.scope.
Jan 26 09:38:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb461d3f80a731e1b7237a56971f584c0883e8b73bb2811fd0f72d53b24a2ee/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb461d3f80a731e1b7237a56971f584c0883e8b73bb2811fd0f72d53b24a2ee/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb461d3f80a731e1b7237a56971f584c0883e8b73bb2811fd0f72d53b24a2ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb461d3f80a731e1b7237a56971f584c0883e8b73bb2811fd0f72d53b24a2ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb461d3f80a731e1b7237a56971f584c0883e8b73bb2811fd0f72d53b24a2ee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:49 compute-0 podman[75636]: 2026-01-26 09:38:49.682640737 +0000 UTC m=+0.108673421 container init 424e7138f0ef0a29f0a3f7c4c8d65b3a982f48d555de740c41d735614449eecc (image=quay.io/ceph/ceph:v19, name=peaceful_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 09:38:49 compute-0 podman[75636]: 2026-01-26 09:38:49.689405322 +0000 UTC m=+0.115437976 container start 424e7138f0ef0a29f0a3f7c4c8d65b3a982f48d555de740c41d735614449eecc (image=quay.io/ceph/ceph:v19, name=peaceful_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:49 compute-0 podman[75636]: 2026-01-26 09:38:49.597734366 +0000 UTC m=+0.023767040 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:49 compute-0 podman[75636]: 2026-01-26 09:38:49.693174665 +0000 UTC m=+0.119207339 container attach 424e7138f0ef0a29f0a3f7c4c8d65b3a982f48d555de740c41d735614449eecc (image=quay.io/ceph/ceph:v19, name=peaceful_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 09:38:49 compute-0 ceph-mon[74456]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:49 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:49 compute-0 ceph-mon[74456]: Set ssh ssh_user
Jan 26 09:38:49 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:49 compute-0 ceph-mon[74456]: Set ssh ssh_config
Jan 26 09:38:49 compute-0 ceph-mon[74456]: ssh user set to ceph-admin. sudo will be used
Jan 26 09:38:49 compute-0 ceph-mon[74456]: mgrmap e8: compute-0.zllcia(active, since 2s)
Jan 26 09:38:49 compute-0 ceph-mon[74456]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:49 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:49 compute-0 ceph-mon[74456]: Set ssh ssh_identity_key
Jan 26 09:38:49 compute-0 ceph-mon[74456]: Set ssh private key
Jan 26 09:38:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 26 09:38:50 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:50 compute-0 ceph-mgr[74755]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 26 09:38:50 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
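
These set-priv-key/set-pub-key dispatches store the orchestrator's SSH identity in the mon config-key store (mgr/cephadm/ssh_identity_key and mgr/cephadm/ssh_identity_pub, per the mon_command lines above); the key material is staged at the /tmp/cephadm-ssh-key paths the kernel reports being bind-mounted into the helper containers. A sketch of the matching CLI calls, assuming the key pair already exists at those paths:

    ceph cephadm set-priv-key -i /tmp/cephadm-ssh-key
    ceph cephadm set-pub-key -i /tmp/cephadm-ssh-key.pub
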
Jan 26 09:38:50 compute-0 systemd[1]: libpod-424e7138f0ef0a29f0a3f7c4c8d65b3a982f48d555de740c41d735614449eecc.scope: Deactivated successfully.
Jan 26 09:38:50 compute-0 podman[75636]: 2026-01-26 09:38:50.018992201 +0000 UTC m=+0.445024885 container died 424e7138f0ef0a29f0a3f7c4c8d65b3a982f48d555de740c41d735614449eecc (image=quay.io/ceph/ceph:v19, name=peaceful_bhaskara, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 09:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cb461d3f80a731e1b7237a56971f584c0883e8b73bb2811fd0f72d53b24a2ee-merged.mount: Deactivated successfully.
Jan 26 09:38:50 compute-0 podman[75636]: 2026-01-26 09:38:50.057888405 +0000 UTC m=+0.483921059 container remove 424e7138f0ef0a29f0a3f7c4c8d65b3a982f48d555de740c41d735614449eecc (image=quay.io/ceph/ceph:v19, name=peaceful_bhaskara, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:50 compute-0 systemd[1]: libpod-conmon-424e7138f0ef0a29f0a3f7c4c8d65b3a982f48d555de740c41d735614449eecc.scope: Deactivated successfully.
Jan 26 09:38:50 compute-0 podman[75690]: 2026-01-26 09:38:50.12614493 +0000 UTC m=+0.047630353 container create df887626dc055b1d48249a93f185175f79e8c0e8d329aabb17521ff95f74a59c (image=quay.io/ceph/ceph:v19, name=vigilant_darwin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 09:38:50 compute-0 systemd[1]: Started libpod-conmon-df887626dc055b1d48249a93f185175f79e8c0e8d329aabb17521ff95f74a59c.scope.
Jan 26 09:38:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:50 compute-0 podman[75690]: 2026-01-26 09:38:50.10090815 +0000 UTC m=+0.022393653 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de473d8c82e97a1802dc01214e988d4a1b596459f5b70a908f3eecb2ee00131/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de473d8c82e97a1802dc01214e988d4a1b596459f5b70a908f3eecb2ee00131/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de473d8c82e97a1802dc01214e988d4a1b596459f5b70a908f3eecb2ee00131/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:50 compute-0 podman[75690]: 2026-01-26 09:38:50.213970941 +0000 UTC m=+0.135456434 container init df887626dc055b1d48249a93f185175f79e8c0e8d329aabb17521ff95f74a59c (image=quay.io/ceph/ceph:v19, name=vigilant_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:50 compute-0 podman[75690]: 2026-01-26 09:38:50.222584036 +0000 UTC m=+0.144069499 container start df887626dc055b1d48249a93f185175f79e8c0e8d329aabb17521ff95f74a59c (image=quay.io/ceph/ceph:v19, name=vigilant_darwin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 09:38:50 compute-0 podman[75690]: 2026-01-26 09:38:50.226407051 +0000 UTC m=+0.147892504 container attach df887626dc055b1d48249a93f185175f79e8c0e8d329aabb17521ff95f74a59c (image=quay.io/ceph/ceph:v19, name=vigilant_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 26 09:38:50 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:50 compute-0 vigilant_darwin[75707]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzIgvt7/fIz+3LYLkkIxUSFgVhfaqjIlarOnHYIJ/TdtAEvMoPQADPaBNcki6gVyOxzAR4QV8z/ehYIY4qXkIXXB48p6vjPgVW7eLiRnXb4sBKUo99W3nz7HmoqlJxZ8NEpTS6qQPTZN8HufAHvDfLi85AGQWXsIPsXTrYRa9YQsYEAdQkRX0ay213SF4fjzEcTueLhHJZMMrqqMcI4EWASDKv2KQ4PhaHEfQbA3nDLXTFjm5BjBk+qt60hNMz+eBpLNzO+awPegL4TTgEtIw66Org1LdHEXcnB5QKPZXFbd+325ovzMZf259pBBMcJKu7pCxEvkva3I1879sP+O8WkAgj74ltpOs2MK9CmZreQEIaLOBE4Lma7Eg66HmIM2buC9a20tmgWJARXgTQ1RY+zz2Mq+1QZ0mQE6C+6qKS8RKCpU+Xegoxxm/otud/BB90A2LCazTX7VBwI9jJhUirvglUIwSTAh9zrDTfSFlwFzh+D5iXIczGhE9cTbQv/bc= zuul@controller
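
The vigilant_darwin output above is the cluster SSH public key as returned by get-pub-key. To bring further hosts under management, the usual pattern is to export it and install it for the configured SSH user (the output file name and target host below are placeholders):

    ceph cephadm get-pub-key > ~/ceph.pub
    ssh-copy-id -f -i ~/ceph.pub ceph-admin@<new-host>
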
Jan 26 09:38:50 compute-0 systemd[1]: libpod-df887626dc055b1d48249a93f185175f79e8c0e8d329aabb17521ff95f74a59c.scope: Deactivated successfully.
Jan 26 09:38:50 compute-0 podman[75690]: 2026-01-26 09:38:50.562031515 +0000 UTC m=+0.483516948 container died df887626dc055b1d48249a93f185175f79e8c0e8d329aabb17521ff95f74a59c (image=quay.io/ceph/ceph:v19, name=vigilant_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 09:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8de473d8c82e97a1802dc01214e988d4a1b596459f5b70a908f3eecb2ee00131-merged.mount: Deactivated successfully.
Jan 26 09:38:50 compute-0 podman[75690]: 2026-01-26 09:38:50.596582469 +0000 UTC m=+0.518067892 container remove df887626dc055b1d48249a93f185175f79e8c0e8d329aabb17521ff95f74a59c (image=quay.io/ceph/ceph:v19, name=vigilant_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 26 09:38:50 compute-0 systemd[1]: libpod-conmon-df887626dc055b1d48249a93f185175f79e8c0e8d329aabb17521ff95f74a59c.scope: Deactivated successfully.
Jan 26 09:38:50 compute-0 podman[75744]: 2026-01-26 09:38:50.650484212 +0000 UTC m=+0.037322561 container create fbdc3ea06965e93a5902ededf116e07dd4815f0fd2fa0bd4821cd000a2030f5d (image=quay.io/ceph/ceph:v19, name=objective_volhard, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:50 compute-0 ceph-mgr[74755]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 09:38:50 compute-0 systemd[1]: Started libpod-conmon-fbdc3ea06965e93a5902ededf116e07dd4815f0fd2fa0bd4821cd000a2030f5d.scope.
Jan 26 09:38:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248918d66e3216c850b72e8f5e63a6f716d1445d3113f516dd6480a1ea08ac8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248918d66e3216c850b72e8f5e63a6f716d1445d3113f516dd6480a1ea08ac8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248918d66e3216c850b72e8f5e63a6f716d1445d3113f516dd6480a1ea08ac8f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:50 compute-0 podman[75744]: 2026-01-26 09:38:50.634884556 +0000 UTC m=+0.021722935 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:50 compute-0 podman[75744]: 2026-01-26 09:38:50.732665158 +0000 UTC m=+0.119503527 container init fbdc3ea06965e93a5902ededf116e07dd4815f0fd2fa0bd4821cd000a2030f5d (image=quay.io/ceph/ceph:v19, name=objective_volhard, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:38:50 compute-0 podman[75744]: 2026-01-26 09:38:50.743522375 +0000 UTC m=+0.130360744 container start fbdc3ea06965e93a5902ededf116e07dd4815f0fd2fa0bd4821cd000a2030f5d (image=quay.io/ceph/ceph:v19, name=objective_volhard, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:38:50 compute-0 podman[75744]: 2026-01-26 09:38:50.747045932 +0000 UTC m=+0.133884291 container attach fbdc3ea06965e93a5902ededf116e07dd4815f0fd2fa0bd4821cd000a2030f5d (image=quay.io/ceph/ceph:v19, name=objective_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 09:38:51 compute-0 ceph-mon[74456]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:51 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:51 compute-0 ceph-mon[74456]: Set ssh ssh_identity_pub
Jan 26 09:38:51 compute-0 ceph-mon[74456]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:51 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:51 compute-0 sshd-session[75787]: Accepted publickey for ceph-admin from 192.168.122.100 port 42428 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:38:51 compute-0 systemd-logind[787]: New session 21 of user ceph-admin.
Jan 26 09:38:51 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 26 09:38:51 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 26 09:38:51 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 26 09:38:51 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 26 09:38:51 compute-0 systemd[75791]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:38:51 compute-0 systemd[75791]: Queued start job for default target Main User Target.
Jan 26 09:38:51 compute-0 sshd-session[75805]: Accepted publickey for ceph-admin from 192.168.122.100 port 42434 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:38:51 compute-0 systemd[75791]: Created slice User Application Slice.
Jan 26 09:38:51 compute-0 systemd[75791]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 26 09:38:51 compute-0 systemd[75791]: Started Daily Cleanup of User's Temporary Directories.
Jan 26 09:38:51 compute-0 systemd[75791]: Reached target Paths.
Jan 26 09:38:51 compute-0 systemd[75791]: Reached target Timers.
Jan 26 09:38:51 compute-0 systemd-logind[787]: New session 23 of user ceph-admin.
Jan 26 09:38:51 compute-0 systemd[75791]: Starting D-Bus User Message Bus Socket...
Jan 26 09:38:51 compute-0 systemd[75791]: Starting Create User's Volatile Files and Directories...
Jan 26 09:38:51 compute-0 systemd[75791]: Finished Create User's Volatile Files and Directories.
Jan 26 09:38:51 compute-0 systemd[75791]: Listening on D-Bus User Message Bus Socket.
Jan 26 09:38:51 compute-0 systemd[75791]: Reached target Sockets.
Jan 26 09:38:51 compute-0 systemd[75791]: Reached target Basic System.
Jan 26 09:38:51 compute-0 systemd[75791]: Reached target Main User Target.
Jan 26 09:38:51 compute-0 systemd[75791]: Startup finished in 112ms.
Jan 26 09:38:51 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 26 09:38:51 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Jan 26 09:38:51 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 26 09:38:51 compute-0 sshd-session[75787]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:38:51 compute-0 sshd-session[75805]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:38:51 compute-0 sudo[75812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:38:51 compute-0 sudo[75812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:51 compute-0 sudo[75812]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:51 compute-0 sshd-session[75837]: Accepted publickey for ceph-admin from 192.168.122.100 port 42450 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:38:52 compute-0 systemd-logind[787]: New session 24 of user ceph-admin.
Jan 26 09:38:52 compute-0 ceph-mon[74456]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:52 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 26 09:38:52 compute-0 sshd-session[75837]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:38:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053102 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:38:52 compute-0 sudo[75841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Jan 26 09:38:52 compute-0 sudo[75841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:52 compute-0 sudo[75841]: pam_unix(sudo:session): session closed for user root
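
The sudo COMMAND above is the first remote action of orch host add: the mgr logs in as ceph-admin and runs the cephadm binary's check-host subcommand, which validates the host (expected hostname, container engine, time synchronization) before it is accepted. Reproduced manually on compute-0, using the path from the log line:

    sudo python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 \
        --timeout 895 check-host --expect-hostname compute-0
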
Jan 26 09:38:52 compute-0 sshd-session[75866]: Accepted publickey for ceph-admin from 192.168.122.100 port 42460 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:38:52 compute-0 systemd-logind[787]: New session 25 of user ceph-admin.
Jan 26 09:38:52 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 26 09:38:52 compute-0 sshd-session[75866]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:38:52 compute-0 sudo[75870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Jan 26 09:38:52 compute-0 sudo[75870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:52 compute-0 sudo[75870]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:52 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 26 09:38:52 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 26 09:38:52 compute-0 ceph-mgr[74755]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 09:38:52 compute-0 sshd-session[75895]: Accepted publickey for ceph-admin from 192.168.122.100 port 42472 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:38:52 compute-0 systemd-logind[787]: New session 26 of user ceph-admin.
Jan 26 09:38:52 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 26 09:38:52 compute-0 sshd-session[75895]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:38:52 compute-0 sudo[75899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:52 compute-0 sudo[75899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:52 compute-0 sudo[75899]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:53 compute-0 ceph-mon[74456]: Deploying cephadm binary to compute-0
Jan 26 09:38:53 compute-0 sshd-session[75924]: Accepted publickey for ceph-admin from 192.168.122.100 port 42486 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:38:53 compute-0 systemd-logind[787]: New session 27 of user ceph-admin.
Jan 26 09:38:53 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 26 09:38:53 compute-0 sshd-session[75924]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:38:53 compute-0 sudo[75928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:53 compute-0 sudo[75928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:53 compute-0 sudo[75928]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:53 compute-0 sshd-session[75953]: Accepted publickey for ceph-admin from 192.168.122.100 port 42498 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:38:53 compute-0 systemd-logind[787]: New session 28 of user ceph-admin.
Jan 26 09:38:53 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 26 09:38:53 compute-0 sshd-session[75953]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:38:53 compute-0 sudo[75957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Jan 26 09:38:53 compute-0 sudo[75957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:53 compute-0 sudo[75957]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:53 compute-0 sshd-session[75982]: Accepted publickey for ceph-admin from 192.168.122.100 port 42512 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:38:53 compute-0 systemd-logind[787]: New session 29 of user ceph-admin.
Jan 26 09:38:53 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 26 09:38:53 compute-0 sshd-session[75982]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:38:53 compute-0 sudo[75986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:38:53 compute-0 sudo[75986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:53 compute-0 sudo[75986]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:54 compute-0 sshd-session[76011]: Accepted publickey for ceph-admin from 192.168.122.100 port 42518 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:38:54 compute-0 systemd-logind[787]: New session 30 of user ceph-admin.
Jan 26 09:38:54 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 26 09:38:54 compute-0 sshd-session[76011]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:38:54 compute-0 sudo[76015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Jan 26 09:38:54 compute-0 sudo[76015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:54 compute-0 sudo[76015]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:54 compute-0 sshd-session[76040]: Accepted publickey for ceph-admin from 192.168.122.100 port 42528 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:38:54 compute-0 systemd-logind[787]: New session 31 of user ceph-admin.
Jan 26 09:38:54 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 26 09:38:54 compute-0 sshd-session[76040]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:38:54 compute-0 ceph-mgr[74755]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 09:38:55 compute-0 sshd-session[76067]: Accepted publickey for ceph-admin from 192.168.122.100 port 42532 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:38:55 compute-0 systemd-logind[787]: New session 32 of user ceph-admin.
Jan 26 09:38:55 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 26 09:38:55 compute-0 sshd-session[76067]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:38:55 compute-0 sudo[76071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Jan 26 09:38:55 compute-0 sudo[76071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:55 compute-0 sudo[76071]: pam_unix(sudo:session): session closed for user root
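
The sudo sequence from 09:38:53 to 09:38:55 is cephadm's staged deployment of its own binary: create the file under a /tmp staging tree, chown it to the SSH user so the content can be streamed in without root, fix permissions, then mv it over the final path so the live location changes in a single step (an atomic rename(2) when /tmp and /var/lib/ceph share a filesystem; otherwise mv falls back to copy-and-unlink). Condensed into one sketch, using the fsid and digest shown in the log:

    fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30
    digest=1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
    stage=/tmp/cephadm-$fsid/var/lib/ceph/$fsid
    sudo mkdir -p /var/lib/ceph/$fsid $stage
    sudo touch $stage/cephadm.$digest.new
    sudo chown -R ceph-admin /tmp/cephadm-$fsid
    # ...binary content is streamed into the .new file over SSH here...
    sudo chmod 644 $stage/cephadm.$digest.new
    sudo mv $stage/cephadm.$digest.new /var/lib/ceph/$fsid/cephadm.$digest
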
Jan 26 09:38:55 compute-0 sshd-session[76096]: Accepted publickey for ceph-admin from 192.168.122.100 port 42544 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:38:55 compute-0 systemd-logind[787]: New session 33 of user ceph-admin.
Jan 26 09:38:55 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Jan 26 09:38:55 compute-0 sshd-session[76096]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:38:56 compute-0 sudo[76100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Jan 26 09:38:56 compute-0 sudo[76100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:56 compute-0 sudo[76100]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 26 09:38:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:56 compute-0 ceph-mgr[74755]: [cephadm INFO root] Added host compute-0
Jan 26 09:38:56 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 26 09:38:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 26 09:38:56 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 09:38:56 compute-0 objective_volhard[75760]: Added host 'compute-0' with addr '192.168.122.100'
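
With check-host passed and the binary deployed, the host join is committed: the inventory is persisted via config-key set mgr/cephadm/inventory, and both the mgr and the objective_volhard helper container report success. All of this was triggered by the single command dispatched in the 09:38:51 audit line, equivalent to:

    ceph orch host add compute-0 192.168.122.100
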
Jan 26 09:38:56 compute-0 systemd[1]: libpod-fbdc3ea06965e93a5902ededf116e07dd4815f0fd2fa0bd4821cd000a2030f5d.scope: Deactivated successfully.
Jan 26 09:38:56 compute-0 podman[75744]: 2026-01-26 09:38:56.538345001 +0000 UTC m=+5.925183370 container died fbdc3ea06965e93a5902ededf116e07dd4815f0fd2fa0bd4821cd000a2030f5d (image=quay.io/ceph/ceph:v19, name=objective_volhard, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:38:56 compute-0 sudo[76145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:38:56 compute-0 sudo[76145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:56 compute-0 sudo[76145]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-248918d66e3216c850b72e8f5e63a6f716d1445d3113f516dd6480a1ea08ac8f-merged.mount: Deactivated successfully.
Jan 26 09:38:56 compute-0 podman[75744]: 2026-01-26 09:38:56.603971415 +0000 UTC m=+5.990809774 container remove fbdc3ea06965e93a5902ededf116e07dd4815f0fd2fa0bd4821cd000a2030f5d (image=quay.io/ceph/ceph:v19, name=objective_volhard, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 26 09:38:56 compute-0 systemd[1]: libpod-conmon-fbdc3ea06965e93a5902ededf116e07dd4815f0fd2fa0bd4821cd000a2030f5d.scope: Deactivated successfully.
Jan 26 09:38:56 compute-0 sudo[76183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Jan 26 09:38:56 compute-0 sudo[76183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:56 compute-0 ceph-mgr[74755]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 09:38:56 compute-0 podman[76196]: 2026-01-26 09:38:56.644718809 +0000 UTC m=+0.021961702 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
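
The sudo invocation at 09:38:56 has the remote cephadm pre-pull the service image before any daemons are scheduled; since quay.io/ceph/ceph:v19 is already in local storage, podman's image pull event resolves to the cached digest in roughly 20 ms (m=+0.021). The equivalent manual call on the host, per the logged COMMAND:

    sudo python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 \
        --image quay.io/ceph/ceph:v19 --timeout 895 pull
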
Jan 26 09:38:56 compute-0 podman[76196]: 2026-01-26 09:38:56.739994693 +0000 UTC m=+0.117237576 container create 7597e845557912dbd36e9b45a1899c37026970ee34095438b4340fa08fff77ad (image=quay.io/ceph/ceph:v19, name=ecstatic_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:56 compute-0 systemd[1]: Started libpod-conmon-7597e845557912dbd36e9b45a1899c37026970ee34095438b4340fa08fff77ad.scope.
Jan 26 09:38:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/753d335393f4a09dde211f99cf92dc1240b5031614b3b87b121a6a7036eaf4ab/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/753d335393f4a09dde211f99cf92dc1240b5031614b3b87b121a6a7036eaf4ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/753d335393f4a09dde211f99cf92dc1240b5031614b3b87b121a6a7036eaf4ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:56 compute-0 podman[76196]: 2026-01-26 09:38:56.814850189 +0000 UTC m=+0.192093072 container init 7597e845557912dbd36e9b45a1899c37026970ee34095438b4340fa08fff77ad (image=quay.io/ceph/ceph:v19, name=ecstatic_ganguly, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 09:38:56 compute-0 podman[76196]: 2026-01-26 09:38:56.821395768 +0000 UTC m=+0.198638641 container start 7597e845557912dbd36e9b45a1899c37026970ee34095438b4340fa08fff77ad (image=quay.io/ceph/ceph:v19, name=ecstatic_ganguly, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 26 09:38:56 compute-0 podman[76196]: 2026-01-26 09:38:56.825165851 +0000 UTC m=+0.202408744 container attach 7597e845557912dbd36e9b45a1899c37026970ee34095438b4340fa08fff77ad (image=quay.io/ceph/ceph:v19, name=ecstatic_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 26 09:38:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:38:57 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:57 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 26 09:38:57 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 26 09:38:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 26 09:38:57 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:57 compute-0 ecstatic_ganguly[76225]: Scheduled mon update...
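
This span ends with the mon service scheduled rather than deployed: orch apply saves a mon spec with placement count 5 under mgr/cephadm/spec.mon, and the cephadm serve loop places the daemons asynchronously, hence "Scheduled mon update..." as the only immediate output. The dispatched command is equivalent to:

    ceph orch apply mon 5
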
Jan 26 09:38:57 compute-0 systemd[1]: libpod-7597e845557912dbd36e9b45a1899c37026970ee34095438b4340fa08fff77ad.scope: Deactivated successfully.
Jan 26 09:38:57 compute-0 podman[76196]: 2026-01-26 09:38:57.330185505 +0000 UTC m=+0.707428378 container died 7597e845557912dbd36e9b45a1899c37026970ee34095438b4340fa08fff77ad (image=quay.io/ceph/ceph:v19, name=ecstatic_ganguly, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Jan 26 09:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-753d335393f4a09dde211f99cf92dc1240b5031614b3b87b121a6a7036eaf4ab-merged.mount: Deactivated successfully.
Jan 26 09:38:57 compute-0 podman[76196]: 2026-01-26 09:38:57.365557042 +0000 UTC m=+0.742799915 container remove 7597e845557912dbd36e9b45a1899c37026970ee34095438b4340fa08fff77ad (image=quay.io/ceph/ceph:v19, name=ecstatic_ganguly, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:57 compute-0 systemd[1]: libpod-conmon-7597e845557912dbd36e9b45a1899c37026970ee34095438b4340fa08fff77ad.scope: Deactivated successfully.
Jan 26 09:38:57 compute-0 podman[76241]: 2026-01-26 09:38:57.410800829 +0000 UTC m=+0.531279563 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:57 compute-0 podman[76289]: 2026-01-26 09:38:57.428786491 +0000 UTC m=+0.043154341 container create a83454de74dd9ca1c616febeec8487606e83570b9dd8cbe1ea7ac7be98140ae8 (image=quay.io/ceph/ceph:v19, name=competent_jemison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True)
Jan 26 09:38:57 compute-0 systemd[1]: Started libpod-conmon-a83454de74dd9ca1c616febeec8487606e83570b9dd8cbe1ea7ac7be98140ae8.scope.
Jan 26 09:38:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8eb65fd893eff91b417d4f6deb3f8890f87889a76fdc738cb50d176e6546760/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8eb65fd893eff91b417d4f6deb3f8890f87889a76fdc738cb50d176e6546760/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8eb65fd893eff91b417d4f6deb3f8890f87889a76fdc738cb50d176e6546760/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:57 compute-0 podman[76289]: 2026-01-26 09:38:57.485313125 +0000 UTC m=+0.099680995 container init a83454de74dd9ca1c616febeec8487606e83570b9dd8cbe1ea7ac7be98140ae8 (image=quay.io/ceph/ceph:v19, name=competent_jemison, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:38:57 compute-0 podman[76289]: 2026-01-26 09:38:57.489785297 +0000 UTC m=+0.104153147 container start a83454de74dd9ca1c616febeec8487606e83570b9dd8cbe1ea7ac7be98140ae8 (image=quay.io/ceph/ceph:v19, name=competent_jemison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 09:38:57 compute-0 podman[76289]: 2026-01-26 09:38:57.493585802 +0000 UTC m=+0.107953652 container attach a83454de74dd9ca1c616febeec8487606e83570b9dd8cbe1ea7ac7be98140ae8 (image=quay.io/ceph/ceph:v19, name=competent_jemison, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 26 09:38:57 compute-0 podman[76289]: 2026-01-26 09:38:57.413169584 +0000 UTC m=+0.027537464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:57 compute-0 podman[76316]: 2026-01-26 09:38:57.513528276 +0000 UTC m=+0.041848824 container create f0f3e5b9ade00adfdf197e36d43f5506016fc0ab5723102956a3814233ce7e2d (image=quay.io/ceph/ceph:v19, name=clever_kare, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:38:57 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:57 compute-0 ceph-mon[74456]: Added host compute-0
Jan 26 09:38:57 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 09:38:57 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:57 compute-0 systemd[1]: Started libpod-conmon-f0f3e5b9ade00adfdf197e36d43f5506016fc0ab5723102956a3814233ce7e2d.scope.
Jan 26 09:38:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:57 compute-0 podman[76316]: 2026-01-26 09:38:57.583099658 +0000 UTC m=+0.111420226 container init f0f3e5b9ade00adfdf197e36d43f5506016fc0ab5723102956a3814233ce7e2d (image=quay.io/ceph/ceph:v19, name=clever_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 09:38:57 compute-0 podman[76316]: 2026-01-26 09:38:57.491136204 +0000 UTC m=+0.019456772 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:57 compute-0 podman[76316]: 2026-01-26 09:38:57.589459933 +0000 UTC m=+0.117780491 container start f0f3e5b9ade00adfdf197e36d43f5506016fc0ab5723102956a3814233ce7e2d (image=quay.io/ceph/ceph:v19, name=clever_kare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 26 09:38:57 compute-0 podman[76316]: 2026-01-26 09:38:57.592806964 +0000 UTC m=+0.121127522 container attach f0f3e5b9ade00adfdf197e36d43f5506016fc0ab5723102956a3814233ce7e2d (image=quay.io/ceph/ceph:v19, name=clever_kare, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 09:38:57 compute-0 clever_kare[76336]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
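The clever_kare container above exists only long enough to report the image's Ceph version; cephadm routinely runs throwaway containers like this to probe an image before deploying daemons, which is why journald shows the full create/init/start/attach/died/remove cycle within a fraction of a second. A minimal sketch of the same probe, assuming podman and the quay.io/ceph/ceph:v19 image from these log lines (the exact entrypoint cephadm uses internally may differ):

    import subprocess

    # Throwaway version probe: --rm removes the container on exit, matching
    # the rapid create/init/start/attach/died/remove sequence logged above.
    res = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "ceph",
         "quay.io/ceph/ceph:v19", "--version"],
        capture_output=True, text=True, check=True,
    )
    print(res.stdout.strip())  # e.g. "ceph version 19.2.3 (...) squid (stable)"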
Jan 26 09:38:57 compute-0 systemd[1]: libpod-f0f3e5b9ade00adfdf197e36d43f5506016fc0ab5723102956a3814233ce7e2d.scope: Deactivated successfully.
Jan 26 09:38:57 compute-0 conmon[76336]: conmon f0f3e5b9ade00adfdf19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f0f3e5b9ade00adfdf197e36d43f5506016fc0ab5723102956a3814233ce7e2d.scope/container/memory.events
Jan 26 09:38:57 compute-0 podman[76316]: 2026-01-26 09:38:57.681166879 +0000 UTC m=+0.209487427 container died f0f3e5b9ade00adfdf197e36d43f5506016fc0ab5723102956a3814233ce7e2d (image=quay.io/ceph/ceph:v19, name=clever_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 09:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c407159bdcc5f7dc25fd42eae24e79e840943b6d1b43fca917cd35d9e324e26-merged.mount: Deactivated successfully.
Jan 26 09:38:57 compute-0 podman[76316]: 2026-01-26 09:38:57.718361176 +0000 UTC m=+0.246681724 container remove f0f3e5b9ade00adfdf197e36d43f5506016fc0ab5723102956a3814233ce7e2d (image=quay.io/ceph/ceph:v19, name=clever_kare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 09:38:57 compute-0 systemd[1]: libpod-conmon-f0f3e5b9ade00adfdf197e36d43f5506016fc0ab5723102956a3814233ce7e2d.scope: Deactivated successfully.
Jan 26 09:38:57 compute-0 sudo[76183]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 26 09:38:57 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:57 compute-0 sudo[76371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:38:57 compute-0 sudo[76371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:57 compute-0 sudo[76371]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:57 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:57 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 26 09:38:57 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 26 09:38:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 26 09:38:57 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:57 compute-0 competent_jemison[76315]: Scheduled mgr update...
Jan 26 09:38:57 compute-0 systemd[1]: libpod-a83454de74dd9ca1c616febeec8487606e83570b9dd8cbe1ea7ac7be98140ae8.scope: Deactivated successfully.
Jan 26 09:38:57 compute-0 podman[76289]: 2026-01-26 09:38:57.862620139 +0000 UTC m=+0.476988009 container died a83454de74dd9ca1c616febeec8487606e83570b9dd8cbe1ea7ac7be98140ae8 (image=quay.io/ceph/ceph:v19, name=competent_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 09:38:57 compute-0 sudo[76397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 26 09:38:57 compute-0 sudo[76397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8eb65fd893eff91b417d4f6deb3f8890f87889a76fdc738cb50d176e6546760-merged.mount: Deactivated successfully.
Jan 26 09:38:57 compute-0 podman[76289]: 2026-01-26 09:38:57.935406989 +0000 UTC m=+0.549774839 container remove a83454de74dd9ca1c616febeec8487606e83570b9dd8cbe1ea7ac7be98140ae8 (image=quay.io/ceph/ceph:v19, name=competent_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:38:57 compute-0 systemd[1]: libpod-conmon-a83454de74dd9ca1c616febeec8487606e83570b9dd8cbe1ea7ac7be98140ae8.scope: Deactivated successfully.
Jan 26 09:38:57 compute-0 podman[76434]: 2026-01-26 09:38:57.988549221 +0000 UTC m=+0.035862451 container create 6a822f1da61d461cffc520164335cd45316a259ab2edc896f8d4647d7f1297a1 (image=quay.io/ceph/ceph:v19, name=frosty_almeida, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:38:58 compute-0 systemd[1]: Started libpod-conmon-6a822f1da61d461cffc520164335cd45316a259ab2edc896f8d4647d7f1297a1.scope.
Jan 26 09:38:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf63869cd5571459682d3365a577834756c295465f23e0d86858e60d8bef9b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf63869cd5571459682d3365a577834756c295465f23e0d86858e60d8bef9b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf63869cd5571459682d3365a577834756c295465f23e0d86858e60d8bef9b4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:58 compute-0 podman[76434]: 2026-01-26 09:38:58.045122197 +0000 UTC m=+0.092435447 container init 6a822f1da61d461cffc520164335cd45316a259ab2edc896f8d4647d7f1297a1 (image=quay.io/ceph/ceph:v19, name=frosty_almeida, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 09:38:58 compute-0 podman[76434]: 2026-01-26 09:38:58.050678389 +0000 UTC m=+0.097991619 container start 6a822f1da61d461cffc520164335cd45316a259ab2edc896f8d4647d7f1297a1 (image=quay.io/ceph/ceph:v19, name=frosty_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 26 09:38:58 compute-0 podman[76434]: 2026-01-26 09:38:58.053769063 +0000 UTC m=+0.101082313 container attach 6a822f1da61d461cffc520164335cd45316a259ab2edc896f8d4647d7f1297a1 (image=quay.io/ceph/ceph:v19, name=frosty_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:58 compute-0 podman[76434]: 2026-01-26 09:38:57.973580562 +0000 UTC m=+0.020893812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:58 compute-0 sudo[76397]: pam_unix(sudo:session): session closed for user root
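The sudo lines above show how the cephadm mgr module reaches the host: it logs in as ceph-admin, locates python3 with "which", then runs the cephadm binary it previously copied under /var/lib/ceph/<fsid>/ with a per-call timeout. A sketch of the same check-host call, reusing the exact path from the COMMAND= line above (the helper function name is illustrative only):

    import subprocess

    # Path copied verbatim from the sudo COMMAND= line in the log.
    CEPHADM = ("/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    def check_host(timeout_s: int = 895) -> str:
        """Run 'cephadm check-host' as root, as the mgr does via sudo."""
        res = subprocess.run(
            ["sudo", "/bin/python3", CEPHADM,
             "--timeout", str(timeout_s), "check-host"],
            capture_output=True, text=True, check=True,
        )
        return res.stdout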
Jan 26 09:38:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:38:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:58 compute-0 sudo[76495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:38:58 compute-0 sudo[76495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:58 compute-0 sudo[76495]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:58 compute-0 sudo[76520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 26 09:38:58 compute-0 sudo[76520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:58 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:58 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service crash spec with placement *
Jan 26 09:38:58 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 26 09:38:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 26 09:38:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:58 compute-0 frosty_almeida[76451]: Scheduled crash update...
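Each orch apply seen here follows the same path: the client command is dispatched to the mgr, the cephadm module logs "Saving service ... spec", the spec is persisted through the mon as a mgr/cephadm/spec.* config-key, and a short-lived container reports "Scheduled ... update". The equivalent client-side calls, as a hedged sketch with the placement values taken from the log (mgr with count:2, crash on every host):

    import subprocess

    # 'ceph orch apply <service> <placement>' produces the
    # "Saving service ... spec" lines above.
    subprocess.run(["ceph", "orch", "apply", "mgr", "2"], check=True)
    # '*' places the crash collector on all managed hosts; passing it as
    # an exec argv element avoids shell glob expansion.
    subprocess.run(["ceph", "orch", "apply", "crash", "*"], check=True)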
Jan 26 09:38:58 compute-0 systemd[1]: libpod-6a822f1da61d461cffc520164335cd45316a259ab2edc896f8d4647d7f1297a1.scope: Deactivated successfully.
Jan 26 09:38:58 compute-0 podman[76434]: 2026-01-26 09:38:58.403235176 +0000 UTC m=+0.450548416 container died 6a822f1da61d461cffc520164335cd45316a259ab2edc896f8d4647d7f1297a1 (image=quay.io/ceph/ceph:v19, name=frosty_almeida, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cf63869cd5571459682d3365a577834756c295465f23e0d86858e60d8bef9b4-merged.mount: Deactivated successfully.
Jan 26 09:38:58 compute-0 podman[76434]: 2026-01-26 09:38:58.437994556 +0000 UTC m=+0.485307786 container remove 6a822f1da61d461cffc520164335cd45316a259ab2edc896f8d4647d7f1297a1 (image=quay.io/ceph/ceph:v19, name=frosty_almeida, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:38:58 compute-0 systemd[1]: libpod-conmon-6a822f1da61d461cffc520164335cd45316a259ab2edc896f8d4647d7f1297a1.scope: Deactivated successfully.
Jan 26 09:38:58 compute-0 podman[76566]: 2026-01-26 09:38:58.495071536 +0000 UTC m=+0.036077787 container create ff4c93f7b7f17608622e47254d0f15d618003440b674053a6bf35e86189b2e82 (image=quay.io/ceph/ceph:v19, name=nostalgic_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 09:38:58 compute-0 systemd[1]: Started libpod-conmon-ff4c93f7b7f17608622e47254d0f15d618003440b674053a6bf35e86189b2e82.scope.
Jan 26 09:38:58 compute-0 ceph-mon[74456]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:58 compute-0 ceph-mon[74456]: Saving service mon spec with placement count:5
Jan 26 09:38:58 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:58 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:58 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:58 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f51918917cbdb80e853fe7b72a7616900799921ce4449af627d1db4a76ca940/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f51918917cbdb80e853fe7b72a7616900799921ce4449af627d1db4a76ca940/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f51918917cbdb80e853fe7b72a7616900799921ce4449af627d1db4a76ca940/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:58 compute-0 podman[76566]: 2026-01-26 09:38:58.558377277 +0000 UTC m=+0.099383558 container init ff4c93f7b7f17608622e47254d0f15d618003440b674053a6bf35e86189b2e82 (image=quay.io/ceph/ceph:v19, name=nostalgic_lewin, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 26 09:38:58 compute-0 podman[76566]: 2026-01-26 09:38:58.564245867 +0000 UTC m=+0.105252108 container start ff4c93f7b7f17608622e47254d0f15d618003440b674053a6bf35e86189b2e82 (image=quay.io/ceph/ceph:v19, name=nostalgic_lewin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:38:58 compute-0 podman[76566]: 2026-01-26 09:38:58.568957016 +0000 UTC m=+0.109963277 container attach ff4c93f7b7f17608622e47254d0f15d618003440b674053a6bf35e86189b2e82 (image=quay.io/ceph/ceph:v19, name=nostalgic_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 26 09:38:58 compute-0 podman[76566]: 2026-01-26 09:38:58.476997913 +0000 UTC m=+0.018004184 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:58 compute-0 ceph-mgr[74755]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 09:38:58 compute-0 podman[76668]: 2026-01-26 09:38:58.752748329 +0000 UTC m=+0.045107454 container exec 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 26 09:38:58 compute-0 podman[76668]: 2026-01-26 09:38:58.843608303 +0000 UTC m=+0.135967428 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:38:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 26 09:38:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3916431746' entity='client.admin' 
Jan 26 09:38:58 compute-0 systemd[1]: libpod-ff4c93f7b7f17608622e47254d0f15d618003440b674053a6bf35e86189b2e82.scope: Deactivated successfully.
Jan 26 09:38:58 compute-0 podman[76719]: 2026-01-26 09:38:58.965411492 +0000 UTC m=+0.027447800 container died ff4c93f7b7f17608622e47254d0f15d618003440b674053a6bf35e86189b2e82 (image=quay.io/ceph/ceph:v19, name=nostalgic_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Jan 26 09:38:58 compute-0 sudo[76520]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:38:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f51918917cbdb80e853fe7b72a7616900799921ce4449af627d1db4a76ca940-merged.mount: Deactivated successfully.
Jan 26 09:38:58 compute-0 podman[76719]: 2026-01-26 09:38:58.997266453 +0000 UTC m=+0.059302751 container remove ff4c93f7b7f17608622e47254d0f15d618003440b674053a6bf35e86189b2e82 (image=quay.io/ceph/ceph:v19, name=nostalgic_lewin, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 09:38:59 compute-0 systemd[1]: libpod-conmon-ff4c93f7b7f17608622e47254d0f15d618003440b674053a6bf35e86189b2e82.scope: Deactivated successfully.
Jan 26 09:38:59 compute-0 sudo[76734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:38:59 compute-0 sudo[76734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:59 compute-0 sudo[76734]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:59 compute-0 podman[76756]: 2026-01-26 09:38:59.061260423 +0000 UTC m=+0.041545347 container create d343a102bc6ba588b98b3462454636ddd4e16a6e4b1a4c09b744575aef757ceb (image=quay.io/ceph/ceph:v19, name=upbeat_chebyshev, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:59 compute-0 systemd[1]: Started libpod-conmon-d343a102bc6ba588b98b3462454636ddd4e16a6e4b1a4c09b744575aef757ceb.scope.
Jan 26 09:38:59 compute-0 sudo[76773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:38:59 compute-0 sudo[76773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae721c2c1f09eab6bc8aeaa3ca50a9808382fc69f1315850b4309e4ff44e456/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae721c2c1f09eab6bc8aeaa3ca50a9808382fc69f1315850b4309e4ff44e456/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae721c2c1f09eab6bc8aeaa3ca50a9808382fc69f1315850b4309e4ff44e456/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:38:59 compute-0 podman[76756]: 2026-01-26 09:38:59.129617351 +0000 UTC m=+0.109902305 container init d343a102bc6ba588b98b3462454636ddd4e16a6e4b1a4c09b744575aef757ceb (image=quay.io/ceph/ceph:v19, name=upbeat_chebyshev, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:38:59 compute-0 podman[76756]: 2026-01-26 09:38:59.041746359 +0000 UTC m=+0.022031303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:38:59 compute-0 podman[76756]: 2026-01-26 09:38:59.139020168 +0000 UTC m=+0.119305092 container start d343a102bc6ba588b98b3462454636ddd4e16a6e4b1a4c09b744575aef757ceb (image=quay.io/ceph/ceph:v19, name=upbeat_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:59 compute-0 podman[76756]: 2026-01-26 09:38:59.14201676 +0000 UTC m=+0.122301684 container attach d343a102bc6ba588b98b3462454636ddd4e16a6e4b1a4c09b744575aef757ceb (image=quay.io/ceph/ceph:v19, name=upbeat_chebyshev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:38:59 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76837 (sysctl)
Jan 26 09:38:59 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 26 09:38:59 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 26 09:38:59 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 26 09:38:59 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:59 compute-0 systemd[1]: libpod-d343a102bc6ba588b98b3462454636ddd4e16a6e4b1a4c09b744575aef757ceb.scope: Deactivated successfully.
Jan 26 09:38:59 compute-0 podman[76756]: 2026-01-26 09:38:59.554277669 +0000 UTC m=+0.534562593 container died d343a102bc6ba588b98b3462454636ddd4e16a6e4b1a4c09b744575aef757ceb (image=quay.io/ceph/ceph:v19, name=upbeat_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Jan 26 09:38:59 compute-0 sudo[76773]: pam_unix(sudo:session): session closed for user root
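The gather-facts call that just closed is cephadm's host-inventory probe: it emits a JSON document of hardware and OS facts that the mgr then stores via the mgr/cephadm/host.compute-0 config-key writes visible above. A sketch of the same call, assuming the field names of cephadm's HostFacts output (they may vary by release):

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    res = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "gather-facts"],
        capture_output=True, text=True, check=True,
    )
    facts = json.loads(res.stdout)
    # Field names follow cephadm's HostFacts dump; treat them as assumptions.
    print(facts.get("hostname"), facts.get("memory_total_kb"))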
Jan 26 09:38:59 compute-0 ceph-mon[74456]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:59 compute-0 ceph-mon[74456]: Saving service mgr spec with placement count:2
Jan 26 09:38:59 compute-0 ceph-mon[74456]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:38:59 compute-0 ceph-mon[74456]: Saving service crash spec with placement *
Jan 26 09:38:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3916431746' entity='client.admin' 
Jan 26 09:38:59 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:59 compute-0 sudo[76872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:38:59 compute-0 sudo[76872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:59 compute-0 sudo[76872]: pam_unix(sudo:session): session closed for user root
Jan 26 09:38:59 compute-0 sudo[76897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 26 09:38:59 compute-0 sudo[76897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bae721c2c1f09eab6bc8aeaa3ca50a9808382fc69f1315850b4309e4ff44e456-merged.mount: Deactivated successfully.
Jan 26 09:38:59 compute-0 podman[76756]: 2026-01-26 09:38:59.911724809 +0000 UTC m=+0.892009753 container remove d343a102bc6ba588b98b3462454636ddd4e16a6e4b1a4c09b744575aef757ceb (image=quay.io/ceph/ceph:v19, name=upbeat_chebyshev, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 09:38:59 compute-0 sudo[76897]: pam_unix(sudo:session): session closed for user root
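list-networks, which just finished, reports the host's subnets and their interfaces and addresses as JSON; the orchestrator uses this to place mons on the correct public network. A sketch of the call with the image digest copied from the log (the output shape shown in the comment is an assumption based on cephadm's usual subnet-to-interface mapping):

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    res = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE,
         "--timeout", "895", "list-networks"],
        capture_output=True, text=True, check=True,
    )
    # Expected shape: {"192.168.122.0/24": {"eth0": ["192.168.122.100"]}}
    networks = json.loads(res.stdout)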
Jan 26 09:38:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:38:59 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:38:59 compute-0 podman[76939]: 2026-01-26 09:38:59.980689884 +0000 UTC m=+0.045186096 container create e22bea2b0f4662bb324da147c25b35a89abad0eadd7277af7f43cad995c1af1d (image=quay.io/ceph/ceph:v19, name=zealous_knuth, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 09:39:00 compute-0 systemd[1]: Started libpod-conmon-e22bea2b0f4662bb324da147c25b35a89abad0eadd7277af7f43cad995c1af1d.scope.
Jan 26 09:39:00 compute-0 systemd[1]: libpod-conmon-d343a102bc6ba588b98b3462454636ddd4e16a6e4b1a4c09b744575aef757ceb.scope: Deactivated successfully.
Jan 26 09:39:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:00 compute-0 sudo[76953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40731be47ad893e457b0447374452871703d8ed67791f9f897bf4f9d89a67350/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40731be47ad893e457b0447374452871703d8ed67791f9f897bf4f9d89a67350/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40731be47ad893e457b0447374452871703d8ed67791f9f897bf4f9d89a67350/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:00 compute-0 sudo[76953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:00 compute-0 sudo[76953]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:00 compute-0 podman[76939]: 2026-01-26 09:38:59.965324015 +0000 UTC m=+0.029820247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:39:00 compute-0 podman[76939]: 2026-01-26 09:39:00.060702382 +0000 UTC m=+0.125198604 container init e22bea2b0f4662bb324da147c25b35a89abad0eadd7277af7f43cad995c1af1d (image=quay.io/ceph/ceph:v19, name=zealous_knuth, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 09:39:00 compute-0 podman[76939]: 2026-01-26 09:39:00.066303145 +0000 UTC m=+0.130799367 container start e22bea2b0f4662bb324da147c25b35a89abad0eadd7277af7f43cad995c1af1d (image=quay.io/ceph/ceph:v19, name=zealous_knuth, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 09:39:00 compute-0 podman[76939]: 2026-01-26 09:39:00.069150113 +0000 UTC m=+0.133646325 container attach e22bea2b0f4662bb324da147c25b35a89abad0eadd7277af7f43cad995c1af1d (image=quay.io/ceph/ceph:v19, name=zealous_knuth, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 09:39:00 compute-0 sudo[76984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- inventory --format=json-pretty --filter-for-batch
Jan 26 09:39:00 compute-0 sudo[76984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
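The COMMAND= line above is the slowest of these probes: cephadm wraps ceph-volume inventory inside the ceph container to enumerate block devices eligible for OSDs, and --filter-for-batch drops devices that cannot be consumed. A sketch of the same invocation, assuming ceph-volume's usual JSON inventory fields:

    import json
    import subprocess

    FSID = "1a70b85d-e3fd-5814-8a6a-37ea00fcae30"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    CEPHADM = (f"/var/lib/ceph/{FSID}/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    res = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--",
         "inventory", "--format=json-pretty", "--filter-for-batch"],
        capture_output=True, text=True, check=True,
    )
    for dev in json.loads(res.stdout):
        # One entry per block device; 'available' is False for devices
        # ceph-volume rejects (too small, mounted, already has a partition).
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))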
Jan 26 09:39:00 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:39:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 26 09:39:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:00 compute-0 ceph-mgr[74755]: [cephadm INFO root] Added label _admin to host compute-0
Jan 26 09:39:00 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 26 09:39:00 compute-0 zealous_knuth[76979]: Added label _admin to host compute-0
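The _admin label just applied drives keyring distribution: the orch client-keyring set command dispatched at 09:38:59 tells cephadm to maintain client.admin's keyring on every host carrying the label, and orch host label add puts compute-0 into that set, which is why /etc/ceph/ceph.client.admin.keyring keeps appearing in the overlay remounts above. The equivalent client calls, copied from the dispatched cmd JSON in the log:

    import subprocess

    # Maintain /etc/ceph/ceph.client.admin.keyring on all '_admin' hosts,
    # then add compute-0 to that set.
    subprocess.run(["ceph", "orch", "client-keyring", "set",
                    "client.admin", "label:_admin"], check=True)
    subprocess.run(["ceph", "orch", "host", "label", "add",
                    "compute-0", "_admin"], check=True)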
Jan 26 09:39:00 compute-0 systemd[1]: libpod-e22bea2b0f4662bb324da147c25b35a89abad0eadd7277af7f43cad995c1af1d.scope: Deactivated successfully.
Jan 26 09:39:00 compute-0 podman[76939]: 2026-01-26 09:39:00.427014614 +0000 UTC m=+0.491510816 container died e22bea2b0f4662bb324da147c25b35a89abad0eadd7277af7f43cad995c1af1d (image=quay.io/ceph/ceph:v19, name=zealous_knuth, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 26 09:39:00 compute-0 podman[77068]: 2026-01-26 09:39:00.439548097 +0000 UTC m=+0.044928829 container create 5313d26e0bbbe520ae60e55d5657cf480fefed2441a1b1d00cb6bb457da91c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_keller, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-40731be47ad893e457b0447374452871703d8ed67791f9f897bf4f9d89a67350-merged.mount: Deactivated successfully.
Jan 26 09:39:00 compute-0 podman[76939]: 2026-01-26 09:39:00.466062671 +0000 UTC m=+0.530558873 container remove e22bea2b0f4662bb324da147c25b35a89abad0eadd7277af7f43cad995c1af1d (image=quay.io/ceph/ceph:v19, name=zealous_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:39:00 compute-0 systemd[1]: Started libpod-conmon-5313d26e0bbbe520ae60e55d5657cf480fefed2441a1b1d00cb6bb457da91c76.scope.
Jan 26 09:39:00 compute-0 systemd[1]: libpod-conmon-e22bea2b0f4662bb324da147c25b35a89abad0eadd7277af7f43cad995c1af1d.scope: Deactivated successfully.
Jan 26 09:39:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:00 compute-0 podman[77068]: 2026-01-26 09:39:00.494738506 +0000 UTC m=+0.100119248 container init 5313d26e0bbbe520ae60e55d5657cf480fefed2441a1b1d00cb6bb457da91c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 26 09:39:00 compute-0 podman[77068]: 2026-01-26 09:39:00.498606801 +0000 UTC m=+0.103987533 container start 5313d26e0bbbe520ae60e55d5657cf480fefed2441a1b1d00cb6bb457da91c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 09:39:00 compute-0 tender_keller[77099]: 167 167
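The "167 167" printed by tender_keller is the UID and GID of the ceph user inside the image; cephadm needs these so host-side data directories get the right ownership before daemons start. A sketch of one way to reproduce the probe, assuming stat is present in the image (cephadm's internal mechanism for this step may differ):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    res = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    uid, gid = res.stdout.split()  # "167 167" on this image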
Jan 26 09:39:00 compute-0 podman[77068]: 2026-01-26 09:39:00.501283824 +0000 UTC m=+0.106664556 container attach 5313d26e0bbbe520ae60e55d5657cf480fefed2441a1b1d00cb6bb457da91c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_keller, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:39:00 compute-0 systemd[1]: libpod-5313d26e0bbbe520ae60e55d5657cf480fefed2441a1b1d00cb6bb457da91c76.scope: Deactivated successfully.
Jan 26 09:39:00 compute-0 conmon[77099]: conmon 5313d26e0bbbe520ae60 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5313d26e0bbbe520ae60e55d5657cf480fefed2441a1b1d00cb6bb457da91c76.scope/container/memory.events
Jan 26 09:39:00 compute-0 podman[77068]: 2026-01-26 09:39:00.503185247 +0000 UTC m=+0.108566009 container died 5313d26e0bbbe520ae60e55d5657cf480fefed2441a1b1d00cb6bb457da91c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 09:39:00 compute-0 podman[77068]: 2026-01-26 09:39:00.416066705 +0000 UTC m=+0.021447457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:39:00 compute-0 podman[77098]: 2026-01-26 09:39:00.52051121 +0000 UTC m=+0.036646823 container create 5b46e45e6ea6fa5238d215e700c4671f39ebc674f287f042bb37e7d7e08a9cdd (image=quay.io/ceph/ceph:v19, name=serene_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 26 09:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7969ebf9b69fb6b4754eac4e92ab4812fa353b8ca5d94a2b2aad6bacd4d65fd-merged.mount: Deactivated successfully.
Jan 26 09:39:00 compute-0 podman[77068]: 2026-01-26 09:39:00.537568896 +0000 UTC m=+0.142949618 container remove 5313d26e0bbbe520ae60e55d5657cf480fefed2441a1b1d00cb6bb457da91c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 26 09:39:00 compute-0 systemd[1]: libpod-conmon-5313d26e0bbbe520ae60e55d5657cf480fefed2441a1b1d00cb6bb457da91c76.scope: Deactivated successfully.
Jan 26 09:39:00 compute-0 systemd[1]: Started libpod-conmon-5b46e45e6ea6fa5238d215e700c4671f39ebc674f287f042bb37e7d7e08a9cdd.scope.
Jan 26 09:39:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43425a7633e063c20b6046bed324f159160cdf0890a6b6c25c569878987dbf01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43425a7633e063c20b6046bed324f159160cdf0890a6b6c25c569878987dbf01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43425a7633e063c20b6046bed324f159160cdf0890a6b6c25c569878987dbf01/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:00 compute-0 podman[77098]: 2026-01-26 09:39:00.598915083 +0000 UTC m=+0.115050716 container init 5b46e45e6ea6fa5238d215e700c4671f39ebc674f287f042bb37e7d7e08a9cdd (image=quay.io/ceph/ceph:v19, name=serene_pike, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 09:39:00 compute-0 podman[77098]: 2026-01-26 09:39:00.504915124 +0000 UTC m=+0.021050767 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:39:00 compute-0 podman[77098]: 2026-01-26 09:39:00.604407103 +0000 UTC m=+0.120542716 container start 5b46e45e6ea6fa5238d215e700c4671f39ebc674f287f042bb37e7d7e08a9cdd (image=quay.io/ceph/ceph:v19, name=serene_pike, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:39:00 compute-0 podman[77098]: 2026-01-26 09:39:00.607776245 +0000 UTC m=+0.123911878 container attach 5b46e45e6ea6fa5238d215e700c4671f39ebc674f287f042bb37e7d7e08a9cdd (image=quay.io/ceph/ceph:v19, name=serene_pike, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:39:00 compute-0 ceph-mon[74456]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:39:00 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:00 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:00 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:00 compute-0 ceph-mgr[74755]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 09:39:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 26 09:39:01 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3196933580' entity='client.admin' 
Jan 26 09:39:01 compute-0 serene_pike[77132]: set mgr/dashboard/cluster/status
Jan 26 09:39:01 compute-0 systemd[1]: libpod-5b46e45e6ea6fa5238d215e700c4671f39ebc674f287f042bb37e7d7e08a9cdd.scope: Deactivated successfully.
Jan 26 09:39:01 compute-0 podman[77098]: 2026-01-26 09:39:01.077354521 +0000 UTC m=+0.593490134 container died 5b46e45e6ea6fa5238d215e700c4671f39ebc674f287f042bb37e7d7e08a9cdd (image=quay.io/ceph/ceph:v19, name=serene_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:39:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-43425a7633e063c20b6046bed324f159160cdf0890a6b6c25c569878987dbf01-merged.mount: Deactivated successfully.
Jan 26 09:39:01 compute-0 podman[77098]: 2026-01-26 09:39:01.112268875 +0000 UTC m=+0.628404488 container remove 5b46e45e6ea6fa5238d215e700c4671f39ebc674f287f042bb37e7d7e08a9cdd (image=quay.io/ceph/ceph:v19, name=serene_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:39:01 compute-0 systemd[1]: libpod-conmon-5b46e45e6ea6fa5238d215e700c4671f39ebc674f287f042bb37e7d7e08a9cdd.scope: Deactivated successfully.
Jan 26 09:39:01 compute-0 sudo[73401]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:01 compute-0 podman[77177]: 2026-01-26 09:39:01.305266681 +0000 UTC m=+0.036437088 container create b6d028ad126f4b6a436d3aa3e79893b016b2276b8e5e0b2a0e47ecdf06a91030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swartz, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 09:39:01 compute-0 systemd[1]: Started libpod-conmon-b6d028ad126f4b6a436d3aa3e79893b016b2276b8e5e0b2a0e47ecdf06a91030.scope.
Jan 26 09:39:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd2fb510dc602bd64cd192fdae0f8e2c47d79ef4ff2e4ac3d951deb4adde45c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd2fb510dc602bd64cd192fdae0f8e2c47d79ef4ff2e4ac3d951deb4adde45c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd2fb510dc602bd64cd192fdae0f8e2c47d79ef4ff2e4ac3d951deb4adde45c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bd2fb510dc602bd64cd192fdae0f8e2c47d79ef4ff2e4ac3d951deb4adde45c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:01 compute-0 podman[77177]: 2026-01-26 09:39:01.366641668 +0000 UTC m=+0.097812075 container init b6d028ad126f4b6a436d3aa3e79893b016b2276b8e5e0b2a0e47ecdf06a91030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 26 09:39:01 compute-0 podman[77177]: 2026-01-26 09:39:01.372066876 +0000 UTC m=+0.103237283 container start b6d028ad126f4b6a436d3aa3e79893b016b2276b8e5e0b2a0e47ecdf06a91030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swartz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 09:39:01 compute-0 podman[77177]: 2026-01-26 09:39:01.375421728 +0000 UTC m=+0.106592135 container attach b6d028ad126f4b6a436d3aa3e79893b016b2276b8e5e0b2a0e47ecdf06a91030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swartz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 26 09:39:01 compute-0 podman[77177]: 2026-01-26 09:39:01.290600299 +0000 UTC m=+0.021770726 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:39:01 compute-0 sudo[77226]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ranykadbhvhmffdwlptrsegxtcprssqv ; /usr/bin/python3'
Jan 26 09:39:01 compute-0 sudo[77226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:39:01 compute-0 python3[77228]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:39:01 compute-0 podman[77245]: 2026-01-26 09:39:01.794266707 +0000 UTC m=+0.021916121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:39:01 compute-0 ceph-mon[74456]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:39:01 compute-0 ceph-mon[74456]: Added label _admin to host compute-0
Jan 26 09:39:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3196933580' entity='client.admin' 
Jan 26 09:39:01 compute-0 podman[77245]: 2026-01-26 09:39:01.993510193 +0000 UTC m=+0.221159557 container create 3d5fcf29b1e8b717d2d3b6cde12ce08d27708bfa5ba67f2aa334b5f9d53cd092 (image=quay.io/ceph/ceph:v19, name=festive_hypatia, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 26 09:39:02 compute-0 systemd[1]: Started libpod-conmon-3d5fcf29b1e8b717d2d3b6cde12ce08d27708bfa5ba67f2aa334b5f9d53cd092.scope.
Jan 26 09:39:02 compute-0 sad_swartz[77192]: [
Jan 26 09:39:02 compute-0 sad_swartz[77192]:     {
Jan 26 09:39:02 compute-0 sad_swartz[77192]:         "available": false,
Jan 26 09:39:02 compute-0 sad_swartz[77192]:         "being_replaced": false,
Jan 26 09:39:02 compute-0 sad_swartz[77192]:         "ceph_device_lvm": false,
Jan 26 09:39:02 compute-0 sad_swartz[77192]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:         "lsm_data": {},
Jan 26 09:39:02 compute-0 sad_swartz[77192]:         "lvs": [],
Jan 26 09:39:02 compute-0 sad_swartz[77192]:         "path": "/dev/sr0",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:         "rejected_reasons": [
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "Has a FileSystem",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "Insufficient space (<5GB)"
Jan 26 09:39:02 compute-0 sad_swartz[77192]:         ],
Jan 26 09:39:02 compute-0 sad_swartz[77192]:         "sys_api": {
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "actuators": null,
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "device_nodes": [
Jan 26 09:39:02 compute-0 sad_swartz[77192]:                 "sr0"
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             ],
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "devname": "sr0",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "human_readable_size": "482.00 KB",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "id_bus": "ata",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "model": "QEMU DVD-ROM",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "nr_requests": "2",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "parent": "/dev/sr0",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "partitions": {},
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "path": "/dev/sr0",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "removable": "1",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "rev": "2.5+",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "ro": "0",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "rotational": "1",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "sas_address": "",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "sas_device_handle": "",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "scheduler_mode": "mq-deadline",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "sectors": 0,
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "sectorsize": "2048",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "size": 493568.0,
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "support_discard": "2048",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "type": "disk",
Jan 26 09:39:02 compute-0 sad_swartz[77192]:             "vendor": "QEMU"
Jan 26 09:39:02 compute-0 sad_swartz[77192]:         }
Jan 26 09:39:02 compute-0 sad_swartz[77192]:     }
Jan 26 09:39:02 compute-0 sad_swartz[77192]: ]
Jan 26 09:39:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04d713b0144f4e61248cbd2e6757577406d19ec35d37052f23c2adabf149330/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04d713b0144f4e61248cbd2e6757577406d19ec35d37052f23c2adabf149330/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:02 compute-0 podman[77245]: 2026-01-26 09:39:02.066607921 +0000 UTC m=+0.294257285 container init 3d5fcf29b1e8b717d2d3b6cde12ce08d27708bfa5ba67f2aa334b5f9d53cd092 (image=quay.io/ceph/ceph:v19, name=festive_hypatia, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 09:39:02 compute-0 systemd[1]: libpod-b6d028ad126f4b6a436d3aa3e79893b016b2276b8e5e0b2a0e47ecdf06a91030.scope: Deactivated successfully.
Jan 26 09:39:02 compute-0 conmon[77192]: conmon b6d028ad126f4b6a436d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b6d028ad126f4b6a436d3aa3e79893b016b2276b8e5e0b2a0e47ecdf06a91030.scope/container/memory.events
Jan 26 09:39:02 compute-0 podman[77177]: 2026-01-26 09:39:02.069278994 +0000 UTC m=+0.800449411 container died b6d028ad126f4b6a436d3aa3e79893b016b2276b8e5e0b2a0e47ecdf06a91030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swartz, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 26 09:39:02 compute-0 podman[77245]: 2026-01-26 09:39:02.07355486 +0000 UTC m=+0.301204224 container start 3d5fcf29b1e8b717d2d3b6cde12ce08d27708bfa5ba67f2aa334b5f9d53cd092 (image=quay.io/ceph/ceph:v19, name=festive_hypatia, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 09:39:02 compute-0 podman[77245]: 2026-01-26 09:39:02.080462909 +0000 UTC m=+0.308112273 container attach 3d5fcf29b1e8b717d2d3b6cde12ce08d27708bfa5ba67f2aa334b5f9d53cd092 (image=quay.io/ceph/ceph:v19, name=festive_hypatia, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 26 09:39:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bd2fb510dc602bd64cd192fdae0f8e2c47d79ef4ff2e4ac3d951deb4adde45c-merged.mount: Deactivated successfully.
Jan 26 09:39:02 compute-0 podman[77177]: 2026-01-26 09:39:02.111375104 +0000 UTC m=+0.842545511 container remove b6d028ad126f4b6a436d3aa3e79893b016b2276b8e5e0b2a0e47ecdf06a91030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 26 09:39:02 compute-0 systemd[1]: libpod-conmon-b6d028ad126f4b6a436d3aa3e79893b016b2276b8e5e0b2a0e47ecdf06a91030.scope: Deactivated successfully.
Jan 26 09:39:02 compute-0 sudo[76984]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:39:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:39:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:39:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:39:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:39:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 09:39:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 09:39:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:39:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:39:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:39:02 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:39:02 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:39:02 compute-0 sudo[78282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 26 09:39:02 compute-0 sudo[78282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:02 compute-0 sudo[78282]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 sudo[78322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph
Jan 26 09:39:02 compute-0 sudo[78322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:02 compute-0 sudo[78322]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 sudo[78347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:39:02 compute-0 sudo[78347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:02 compute-0 sudo[78347]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 sudo[78372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:39:02 compute-0 sudo[78372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:02 compute-0 sudo[78372]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 26 09:39:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1781213073' entity='client.admin' 
Jan 26 09:39:02 compute-0 systemd[1]: libpod-3d5fcf29b1e8b717d2d3b6cde12ce08d27708bfa5ba67f2aa334b5f9d53cd092.scope: Deactivated successfully.
Jan 26 09:39:02 compute-0 podman[77245]: 2026-01-26 09:39:02.445002844 +0000 UTC m=+0.672652208 container died 3d5fcf29b1e8b717d2d3b6cde12ce08d27708bfa5ba67f2aa334b5f9d53cd092 (image=quay.io/ceph/ceph:v19, name=festive_hypatia, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:39:02 compute-0 sudo[78397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:39:02 compute-0 sudo[78397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:02 compute-0 sudo[78397]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d04d713b0144f4e61248cbd2e6757577406d19ec35d37052f23c2adabf149330-merged.mount: Deactivated successfully.
Jan 26 09:39:02 compute-0 podman[77245]: 2026-01-26 09:39:02.49424351 +0000 UTC m=+0.721892874 container remove 3d5fcf29b1e8b717d2d3b6cde12ce08d27708bfa5ba67f2aa334b5f9d53cd092 (image=quay.io/ceph/ceph:v19, name=festive_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 09:39:02 compute-0 systemd[1]: libpod-conmon-3d5fcf29b1e8b717d2d3b6cde12ce08d27708bfa5ba67f2aa334b5f9d53cd092.scope: Deactivated successfully.
Jan 26 09:39:02 compute-0 sudo[77226]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 sudo[78458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:39:02 compute-0 sudo[78458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:02 compute-0 sudo[78458]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 sudo[78483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:39:02 compute-0 sudo[78483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:02 compute-0 sudo[78483]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 ceph-mgr[74755]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 09:39:02 compute-0 sudo[78508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 26 09:39:02 compute-0 sudo[78508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:02 compute-0 sudo[78508]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:39:02 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:39:02 compute-0 sudo[78533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:39:02 compute-0 sudo[78533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:02 compute-0 sudo[78533]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 sudo[78558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:39:02 compute-0 sudo[78558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:02 compute-0 sudo[78558]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 sudo[78583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:39:02 compute-0 sudo[78583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:02 compute-0 sudo[78583]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 sudo[78616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:39:02 compute-0 sudo[78616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:02 compute-0 sudo[78616]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:02 compute-0 sudo[78664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:39:02 compute-0 sudo[78664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:02 compute-0 sudo[78664]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 sudo[78756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:39:03 compute-0 sudo[78756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[78756]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 sudo[78781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:39:03 compute-0 sudo[78781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[78781]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 09:39:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:39:03 compute-0 ceph-mon[74456]: Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:39:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1781213073' entity='client.admin' 
Jan 26 09:39:03 compute-0 ceph-mon[74456]: Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:39:03 compute-0 sudo[78806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:39:03 compute-0 sudo[78806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[78806]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:39:03 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:39:03 compute-0 sudo[78835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 26 09:39:03 compute-0 sudo[78835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[78835]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 sudo[78879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph
Jan 26 09:39:03 compute-0 sudo[78879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[78879]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 sudo[78928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:39:03 compute-0 sudo[78928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[78976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrgtfyowzkvflvbeoqufewkpqokyaauq ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769420342.8675847-37183-249966542270016/async_wrapper.py j913583833817 30 /home/zuul/.ansible/tmp/ansible-tmp-1769420342.8675847-37183-249966542270016/AnsiballZ_command.py _'
Jan 26 09:39:03 compute-0 sudo[78976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:39:03 compute-0 sudo[78928]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 sudo[78981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:39:03 compute-0 sudo[78981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[78981]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 sudo[79006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:39:03 compute-0 sudo[79006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[79006]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 ansible-async_wrapper.py[78980]: Invoked with j913583833817 30 /home/zuul/.ansible/tmp/ansible-tmp-1769420342.8675847-37183-249966542270016/AnsiballZ_command.py _
Jan 26 09:39:03 compute-0 ansible-async_wrapper.py[79039]: Starting module and watcher
Jan 26 09:39:03 compute-0 ansible-async_wrapper.py[79039]: Start watching 79042 (30)
Jan 26 09:39:03 compute-0 ansible-async_wrapper.py[79042]: Start module (79042)
Jan 26 09:39:03 compute-0 ansible-async_wrapper.py[78980]: Return async_wrapper task started.
Jan 26 09:39:03 compute-0 sudo[78976]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 sudo[79059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:39:03 compute-0 sudo[79059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[79059]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 sudo[79084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:39:03 compute-0 sudo[79084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[79084]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 python3[79047]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:39:03 compute-0 sudo[79109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 26 09:39:03 compute-0 sudo[79109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[79109]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:39:03 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:39:03 compute-0 podman[79112]: 2026-01-26 09:39:03.668213309 +0000 UTC m=+0.042681617 container create 013aac62fe383aca05038c9bfef59ba9f511c58e666e0742d1122c3e05a6664a (image=quay.io/ceph/ceph:v19, name=hopeful_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 26 09:39:03 compute-0 systemd[1]: Started libpod-conmon-013aac62fe383aca05038c9bfef59ba9f511c58e666e0742d1122c3e05a6664a.scope.
Jan 26 09:39:03 compute-0 sudo[79147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:39:03 compute-0 sudo[79147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[79147]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d727066c09462c257a1c168f8b0dd5be6bb135b5b839066a03d3f279cb64a952/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d727066c09462c257a1c168f8b0dd5be6bb135b5b839066a03d3f279cb64a952/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:03 compute-0 podman[79112]: 2026-01-26 09:39:03.73333843 +0000 UTC m=+0.107806758 container init 013aac62fe383aca05038c9bfef59ba9f511c58e666e0742d1122c3e05a6664a (image=quay.io/ceph/ceph:v19, name=hopeful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 26 09:39:03 compute-0 podman[79112]: 2026-01-26 09:39:03.740772143 +0000 UTC m=+0.115240451 container start 013aac62fe383aca05038c9bfef59ba9f511c58e666e0742d1122c3e05a6664a (image=quay.io/ceph/ceph:v19, name=hopeful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:39:03 compute-0 podman[79112]: 2026-01-26 09:39:03.647940665 +0000 UTC m=+0.022409023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:39:03 compute-0 podman[79112]: 2026-01-26 09:39:03.743717143 +0000 UTC m=+0.118185451 container attach 013aac62fe383aca05038c9bfef59ba9f511c58e666e0742d1122c3e05a6664a (image=quay.io/ceph/ceph:v19, name=hopeful_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Jan 26 09:39:03 compute-0 sudo[79177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:39:03 compute-0 sudo[79177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[79177]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 sudo[79203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:39:03 compute-0 sudo[79203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[79203]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 sudo[79228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:39:03 compute-0 sudo[79228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[79228]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:03 compute-0 sudo[79272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:39:03 compute-0 sudo[79272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:03 compute-0 sudo[79272]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:04 compute-0 sudo[79320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:39:04 compute-0 sudo[79320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:04 compute-0 sudo[79320]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:04 compute-0 sudo[79345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:39:04 compute-0 sudo[79345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:04 compute-0 sudo[79345]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:04 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 09:39:04 compute-0 hopeful_almeida[79172]: 
Jan 26 09:39:04 compute-0 hopeful_almeida[79172]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 26 09:39:04 compute-0 systemd[1]: libpod-013aac62fe383aca05038c9bfef59ba9f511c58e666e0742d1122c3e05a6664a.scope: Deactivated successfully.
Jan 26 09:39:04 compute-0 podman[79112]: 2026-01-26 09:39:04.128528512 +0000 UTC m=+0.502996880 container died 013aac62fe383aca05038c9bfef59ba9f511c58e666e0742d1122c3e05a6664a (image=quay.io/ceph/ceph:v19, name=hopeful_almeida, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:39:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d727066c09462c257a1c168f8b0dd5be6bb135b5b839066a03d3f279cb64a952-merged.mount: Deactivated successfully.
Jan 26 09:39:04 compute-0 sudo[79371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:39:04 compute-0 sudo[79371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:04 compute-0 sudo[79371]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:04 compute-0 ceph-mon[74456]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:39:04 compute-0 ceph-mon[74456]: Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:39:04 compute-0 podman[79112]: 2026-01-26 09:39:04.165508992 +0000 UTC m=+0.539977320 container remove 013aac62fe383aca05038c9bfef59ba9f511c58e666e0742d1122c3e05a6664a (image=quay.io/ceph/ceph:v19, name=hopeful_almeida, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:39:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:39:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:39:04 compute-0 systemd[1]: libpod-conmon-013aac62fe383aca05038c9bfef59ba9f511c58e666e0742d1122c3e05a6664a.scope: Deactivated successfully.
Jan 26 09:39:04 compute-0 ansible-async_wrapper.py[79042]: Module complete (79042)
Jan 26 09:39:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:39:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:04 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 45744610-4d55-40e5-bdfa-af74b7c7c9a0 (Updating crash deployment (+1 -> 1))
Jan 26 09:39:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 26 09:39:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 09:39:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
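The two audit entries above record the full round trip of a monitor command: the mgr dispatches "auth get-or-create" for client.crash.compute-0 with the crash profile on both the mon and mgr caps, and the identical command is logged again once it finishes. A minimal sketch of issuing the same command through the librados Python binding (assumes the python3-rados package, a reachable cluster, and an admin keyring on the default search path):

    import json
    import rados

    # Connect as client.admin via the local ceph.conf (assumption: keyring present).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    # Same JSON command the mgr dispatched in the audit entries above.
    cmd = {
        "prefix": "auth get-or-create",
        "entity": "client.crash.compute-0",
        "caps": ["mon", "profile crash", "mgr", "profile crash"],
    }
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outbuf.decode())  # 0 and the (existing or new) keyring on success
    cluster.shutdown()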
Jan 26 09:39:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:39:04 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:04 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 26 09:39:04 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 26 09:39:04 compute-0 sudo[79407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:39:04 compute-0 sudo[79407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:04 compute-0 sudo[79407]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:04 compute-0 sudo[79432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:39:04 compute-0 sudo[79432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
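The sudo command above shows how the orchestrator actually deploys a daemon on a host: it invokes a copy of the cephadm script it shipped into /var/lib/ceph/<fsid>/, whose filename carries a 64-hex-character suffix that looks like the SHA-256 of the file's own contents. A minimal sketch for confirming that reading of the name (path copied from the log line; running it requires read access to the file):

    import hashlib
    from pathlib import Path

    p = Path("/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/"
             "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    digest = hashlib.sha256(p.read_bytes()).hexdigest()
    print(digest, p.name.endswith(digest))  # True if the suffix really is the checksum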
Jan 26 09:39:04 compute-0 ceph-mgr[74755]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 09:39:04 compute-0 sudo[79559]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgdclrsgstxqiondrfeuwqaoeikqyvkk ; /usr/bin/python3'
Jan 26 09:39:04 compute-0 sudo[79559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:39:04 compute-0 podman[79524]: 2026-01-26 09:39:04.847984907 +0000 UTC m=+0.065352217 container create 68e9bb777e82f9bb9bed7b50d44df9a71ea12fbb426ca6e962207ecddee0d7e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 09:39:04 compute-0 systemd[1]: Started libpod-conmon-68e9bb777e82f9bb9bed7b50d44df9a71ea12fbb426ca6e962207ecddee0d7e8.scope.
Jan 26 09:39:04 compute-0 podman[79524]: 2026-01-26 09:39:04.821406671 +0000 UTC m=+0.038774001 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:39:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:04 compute-0 podman[79524]: 2026-01-26 09:39:04.935500529 +0000 UTC m=+0.152867829 container init 68e9bb777e82f9bb9bed7b50d44df9a71ea12fbb426ca6e962207ecddee0d7e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 09:39:04 compute-0 podman[79524]: 2026-01-26 09:39:04.941671738 +0000 UTC m=+0.159039018 container start 68e9bb777e82f9bb9bed7b50d44df9a71ea12fbb426ca6e962207ecddee0d7e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hellman, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:39:04 compute-0 podman[79524]: 2026-01-26 09:39:04.94505839 +0000 UTC m=+0.162425670 container attach 68e9bb777e82f9bb9bed7b50d44df9a71ea12fbb426ca6e962207ecddee0d7e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hellman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:39:04 compute-0 crazy_hellman[79566]: 167 167
Jan 26 09:39:04 compute-0 systemd[1]: libpod-68e9bb777e82f9bb9bed7b50d44df9a71ea12fbb426ca6e962207ecddee0d7e8.scope: Deactivated successfully.
Jan 26 09:39:04 compute-0 podman[79524]: 2026-01-26 09:39:04.94906258 +0000 UTC m=+0.166429860 container died 68e9bb777e82f9bb9bed7b50d44df9a71ea12fbb426ca6e962207ecddee0d7e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:39:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-17302a52877295463c5512e2a5a80ee3cc392fed6dd82eac3b3edba897f346e4-merged.mount: Deactivated successfully.
Jan 26 09:39:04 compute-0 podman[79524]: 2026-01-26 09:39:04.98749214 +0000 UTC m=+0.204859420 container remove 68e9bb777e82f9bb9bed7b50d44df9a71ea12fbb426ca6e962207ecddee0d7e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hellman, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Jan 26 09:39:04 compute-0 systemd[1]: libpod-conmon-68e9bb777e82f9bb9bed7b50d44df9a71ea12fbb426ca6e962207ecddee0d7e8.scope: Deactivated successfully.
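The crazy_hellman entries above are the complete lifecycle of a one-shot `podman run --rm` container: create, image pull (journald flushed it slightly out of order), init, start, attach, died, remove, with the matching libpod scopes deactivated by systemd. Its only output, "167 167", is consistent with a uid/gid probe of the ceph user, which is uid/gid 167 in the official images. A minimal sketch for replaying such events, assuming a podman recent enough to support `--format json` on `podman events`:

    import json
    import subprocess

    # One JSON object per line; stop streaming once the backlog is flushed.
    out = subprocess.run(
        ["podman", "events", "--since", "5m", "--stream=false", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        ev = json.loads(line)
        # Key casing has varied across podman versions, so try both.
        print(ev.get("Time") or ev.get("time"),
              ev.get("Name") or ev.get("name"),
              ev.get("Status") or ev.get("status"))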
Jan 26 09:39:05 compute-0 python3[79563]: ansible-ansible.legacy.async_status Invoked with jid=j913583833817.78980 mode=status _async_dir=/root/.ansible_async
Jan 26 09:39:05 compute-0 systemd[1]: Reloading.
Jan 26 09:39:05 compute-0 sudo[79559]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:05 compute-0 systemd-sysv-generator[79631]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:39:05 compute-0 systemd-rc-local-generator[79628]: /etc/rc.d/rc.local is not marked executable, skipping.
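The "Reloading." above is systemd re-reading its units after cephadm installed the crash service; the two generator messages it triggers are routine on EL9 (a legacy network initscript, and an rc.local that is skipped because rc-local.service only runs it when executable) and are unrelated to the Ceph deployment. A one-line check of the rc.local condition:

    import os
    # The generator skips /etc/rc.d/rc.local unless this prints True.
    print(os.access("/etc/rc.d/rc.local", os.X_OK))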
Jan 26 09:39:05 compute-0 ceph-mon[74456]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 09:39:05 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:05 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:05 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:05 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 09:39:05 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 26 09:39:05 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:05 compute-0 ceph-mon[74456]: Deploying daemon crash.compute-0 on compute-0
Jan 26 09:39:05 compute-0 sudo[79663]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvbueuqsiuqhpxpauyzoylmviwputvca ; /usr/bin/python3'
Jan 26 09:39:05 compute-0 sudo[79663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:39:05 compute-0 systemd[1]: Reloading.
Jan 26 09:39:05 compute-0 systemd-sysv-generator[79699]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:39:05 compute-0 systemd-rc-local-generator[79694]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:39:05 compute-0 python3[79667]: ansible-ansible.legacy.async_status Invoked with jid=j913583833817.78980 mode=cleanup _async_dir=/root/.ansible_async
Jan 26 09:39:05 compute-0 sudo[79663]: pam_unix(sudo:session): session closed for user root
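The two async_status invocations above are the second half of Ansible's async pattern: mode=status polls the job record the async wrapper wrote under _async_dir (/root/.ansible_async) until "finished", and mode=cleanup deletes it once the result has been consumed. A minimal sketch reading such a record directly, with the job id copied from the log; the record being the module's result JSON is an Ansible implementation detail, not a stable interface:

    import json
    from pathlib import Path

    record = Path("/root/.ansible_async/j913583833817.78980")
    if record.exists():
        result = json.loads(record.read_text())
        print(result.get("finished"), result.get("rc"))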
Jan 26 09:39:05 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:39:05 compute-0 sudo[79767]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soyfkwpqbczhhwxopjnkrfzempazqdzh ; /usr/bin/python3'
Jan 26 09:39:05 compute-0 sudo[79767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:39:05 compute-0 podman[79778]: 2026-01-26 09:39:05.845835172 +0000 UTC m=+0.048970170 container create 186f116697439f28b96fa37436d6642c21ea571810df83c79470564894aa7a89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e11c6bdfc47220e006d527fd834c50383502eed3836f5d334782ca94adc897/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e11c6bdfc47220e006d527fd834c50383502eed3836f5d334782ca94adc897/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e11c6bdfc47220e006d527fd834c50383502eed3836f5d334782ca94adc897/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e11c6bdfc47220e006d527fd834c50383502eed3836f5d334782ca94adc897/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:05 compute-0 python3[79775]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 09:39:05 compute-0 podman[79778]: 2026-01-26 09:39:05.819452072 +0000 UTC m=+0.022587090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:39:05 compute-0 podman[79778]: 2026-01-26 09:39:05.924483443 +0000 UTC m=+0.127618531 container init 186f116697439f28b96fa37436d6642c21ea571810df83c79470564894aa7a89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:39:05 compute-0 sudo[79767]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:05 compute-0 podman[79778]: 2026-01-26 09:39:05.929280544 +0000 UTC m=+0.132415572 container start 186f116697439f28b96fa37436d6642c21ea571810df83c79470564894aa7a89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:39:05 compute-0 bash[79778]: 186f116697439f28b96fa37436d6642c21ea571810df83c79470564894aa7a89
Jan 26 09:39:05 compute-0 systemd[1]: Started Ceph crash.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:39:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 26 09:39:05 compute-0 sudo[79432]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:39:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:39:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 26 09:39:06 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:06 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev 45744610-4d55-40e5-bdfa-af74b7c7c9a0 (Updating crash deployment (+1 -> 1))
Jan 26 09:39:06 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 45744610-4d55-40e5-bdfa-af74b7c7c9a0 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 26 09:39:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 26 09:39:06 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 26 09:39:06 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 26 09:39:06 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: 2026-01-26T09:39:06.086+0000 7ffb052cc640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 26 09:39:06 compute-0 sudo[79803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:39:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: 2026-01-26T09:39:06.086+0000 7ffb052cc640 -1 AuthRegistry(0x7ffb000698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 26 09:39:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: 2026-01-26T09:39:06.087+0000 7ffb052cc640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 26 09:39:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: 2026-01-26T09:39:06.087+0000 7ffb052cc640 -1 AuthRegistry(0x7ffb052caff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 26 09:39:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: 2026-01-26T09:39:06.088+0000 7ffafeffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 26 09:39:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: 2026-01-26T09:39:06.088+0000 7ffb052cc640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 26 09:39:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 26 09:39:06 compute-0 sudo[79803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
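The burst of "unable to find a keyring" / "disabling cephx" / "RADOS permission denied" messages above comes from ceph-crash's startup ping ("pinging cluster to exercise our key"): the ping searched only the default client keyring paths, none of which exist inside the crash container, whose own key is mounted as ceph.client.crash.compute-0.keyring per the xfs remount lines earlier. The daemon treats the failed ping as non-fatal and proceeds to watch /var/lib/ceph/crash on a 600 s interval. A minimal sketch reproducing the search, with the paths copied verbatim from the error message:

    from pathlib import Path

    SEARCH = [
        "/etc/ceph/ceph.client.admin.keyring",
        "/etc/ceph/ceph.keyring",
        "/etc/ceph/keyring",
        "/etc/ceph/keyring.bin",
    ]
    for path in SEARCH:
        print(path, "found" if Path(path).exists() else "missing")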
Jan 26 09:39:06 compute-0 sudo[79803]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:06 compute-0 sudo[79838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:39:06 compute-0 sudo[79838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:06 compute-0 sudo[79838]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:06 compute-0 sudo[79863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 26 09:39:06 compute-0 sudo[79863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:06 compute-0 sudo[79910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kandviiazezfclzjezpjjvfrchcuwadw ; /usr/bin/python3'
Jan 26 09:39:06 compute-0 sudo[79910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:39:06 compute-0 python3[79913]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:39:06 compute-0 podman[79914]: 2026-01-26 09:39:06.435145961 +0000 UTC m=+0.045391051 container create 7ac5b54986d24f7040d8d7ee55ae29428ed57d65c22d64618c81e7e52a0f6783 (image=quay.io/ceph/ceph:v19, name=eloquent_booth, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 09:39:06 compute-0 systemd[1]: Started libpod-conmon-7ac5b54986d24f7040d8d7ee55ae29428ed57d65c22d64618c81e7e52a0f6783.scope.
Jan 26 09:39:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b69012371e3d221852a0ce9ff226c3b70c94afd78e11e1fbd55331a34e4c24/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:06 compute-0 podman[79914]: 2026-01-26 09:39:06.416592585 +0000 UTC m=+0.026837695 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b69012371e3d221852a0ce9ff226c3b70c94afd78e11e1fbd55331a34e4c24/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06b69012371e3d221852a0ce9ff226c3b70c94afd78e11e1fbd55331a34e4c24/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:06 compute-0 podman[79914]: 2026-01-26 09:39:06.528433262 +0000 UTC m=+0.138678372 container init 7ac5b54986d24f7040d8d7ee55ae29428ed57d65c22d64618c81e7e52a0f6783 (image=quay.io/ceph/ceph:v19, name=eloquent_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:39:06 compute-0 podman[79914]: 2026-01-26 09:39:06.539086034 +0000 UTC m=+0.149331124 container start 7ac5b54986d24f7040d8d7ee55ae29428ed57d65c22d64618c81e7e52a0f6783 (image=quay.io/ceph/ceph:v19, name=eloquent_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 26 09:39:06 compute-0 podman[79914]: 2026-01-26 09:39:06.542306321 +0000 UTC m=+0.152551431 container attach 7ac5b54986d24f7040d8d7ee55ae29428ed57d65c22d64618c81e7e52a0f6783 (image=quay.io/ceph/ceph:v19, name=eloquent_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 09:39:06 compute-0 ceph-mgr[74755]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 26 09:39:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:06 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 26 09:39:06 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 1 completed events
Jan 26 09:39:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:39:06 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:06 compute-0 podman[80025]: 2026-01-26 09:39:06.783621747 +0000 UTC m=+0.053912554 container exec 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 09:39:06 compute-0 podman[80025]: 2026-01-26 09:39:06.875575711 +0000 UTC m=+0.145866508 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:39:06 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 09:39:06 compute-0 eloquent_booth[79953]: 
Jan 26 09:39:06 compute-0 eloquent_booth[79953]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 26 09:39:06 compute-0 systemd[1]: libpod-7ac5b54986d24f7040d8d7ee55ae29428ed57d65c22d64618c81e7e52a0f6783.scope: Deactivated successfully.
Jan 26 09:39:06 compute-0 podman[79914]: 2026-01-26 09:39:06.934016128 +0000 UTC m=+0.544261238 container died 7ac5b54986d24f7040d8d7ee55ae29428ed57d65c22d64618c81e7e52a0f6783 (image=quay.io/ceph/ceph:v19, name=eloquent_booth, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:39:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-06b69012371e3d221852a0ce9ff226c3b70c94afd78e11e1fbd55331a34e4c24-merged.mount: Deactivated successfully.
Jan 26 09:39:06 compute-0 podman[79914]: 2026-01-26 09:39:06.970019302 +0000 UTC m=+0.580264392 container remove 7ac5b54986d24f7040d8d7ee55ae29428ed57d65c22d64618c81e7e52a0f6783 (image=quay.io/ceph/ceph:v19, name=eloquent_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:39:06 compute-0 systemd[1]: libpod-conmon-7ac5b54986d24f7040d8d7ee55ae29428ed57d65c22d64618c81e7e52a0f6783.scope: Deactivated successfully.
Jan 26 09:39:06 compute-0 sudo[79910]: pam_unix(sudo:session): session closed for user root
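The Ansible task above is the readiness probe used throughout this window: run the ceph CLI inside a throwaway quay.io/ceph/ceph:v19 container and parse `orch status --format json`; the container (eloquent_booth) printed {"available": true, ...} before dying and being removed. A minimal sketch of the same probe, trimmed of the two spec-file volume mounts the play also passes; fsid and paths are copied from the log:

    import json
    import subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "status", "--format", "json",
    ]
    status = json.loads(
        subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)
    print(status["available"], status["backend"], status["workers"])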
Jan 26 09:39:06 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:06 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:06 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:06 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:06 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:06 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:06 compute-0 ceph-mon[74456]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:06 compute-0 ceph-mon[74456]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
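TOO_FEW_OSDS fires here because no OSDs have registered yet while osd_pool_default_size is 1 on this single-node job; the mgr's "Giving up on OSDs that haven't reported yet" and the empty pgmap (0 pgs, 0 B) tell the same story, and the warning should clear as soon as the first OSD comes up. A minimal sketch for checking whether the warning is still active, assuming admin CLI access on the host:

    import json
    import subprocess

    health = json.loads(subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print("TOO_FEW_OSDS" in health.get("checks", {}))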
Jan 26 09:39:06 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:07 compute-0 sudo[79863]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:39:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:39:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:39:07 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:39:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:39:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:07 compute-0 sshd-session[79922]: Invalid user admin from 157.245.76.178 port 52880
Jan 26 09:39:07 compute-0 sudo[80107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:39:07 compute-0 sudo[80107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:07 compute-0 sudo[80107]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Jan 26 09:39:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Jan 26 09:39:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Jan 26 09:39:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Jan 26 09:39:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:07 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 26 09:39:07 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 26 09:39:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 26 09:39:07 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:39:07 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:07 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 26 09:39:07 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
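The sequence above is a cephadm reconfigure (as opposed to a full redeploy): before rewriting mon.compute-0's on-disk files, the mgr fetches the mon. key ("auth get"), the mon public_network option, and a minimal conf; "unknown last config time" just means no previous config timestamp was recorded for this daemon. The minimal conf step can be reproduced by hand with the same mon command dispatched above, which emits a bare [global] section carrying fsid and mon_host:

    import subprocess

    conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(conf)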
Jan 26 09:39:07 compute-0 sudo[80155]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tocmexiksiyjwmzsbaxibnjohlztnmcy ; /usr/bin/python3'
Jan 26 09:39:07 compute-0 sudo[80155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:39:07 compute-0 sshd-session[79922]: Connection closed by invalid user admin 157.245.76.178 port 52880 [preauth]
Jan 26 09:39:07 compute-0 sudo[80156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:39:07 compute-0 sudo[80156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:07 compute-0 sudo[80156]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:07 compute-0 sudo[80183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:39:07 compute-0 sudo[80183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:07 compute-0 python3[80165]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:39:07 compute-0 podman[80208]: 2026-01-26 09:39:07.526229597 +0000 UTC m=+0.046297297 container create cc7e444efe54b9f1d6cb79353114832a1966737fd9246f8c62d036d546079efa (image=quay.io/ceph/ceph:v19, name=wonderful_sammet, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 09:39:07 compute-0 systemd[1]: Started libpod-conmon-cc7e444efe54b9f1d6cb79353114832a1966737fd9246f8c62d036d546079efa.scope.
Jan 26 09:39:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e373f5f159601bff3c3b846014869c4d8a6dde542bd5144f6dc350c2fa345ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e373f5f159601bff3c3b846014869c4d8a6dde542bd5144f6dc350c2fa345ad/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e373f5f159601bff3c3b846014869c4d8a6dde542bd5144f6dc350c2fa345ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:07 compute-0 podman[80208]: 2026-01-26 09:39:07.510144647 +0000 UTC m=+0.030212367 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:39:07 compute-0 podman[80208]: 2026-01-26 09:39:07.614146839 +0000 UTC m=+0.134214559 container init cc7e444efe54b9f1d6cb79353114832a1966737fd9246f8c62d036d546079efa (image=quay.io/ceph/ceph:v19, name=wonderful_sammet, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 26 09:39:07 compute-0 podman[80208]: 2026-01-26 09:39:07.620837762 +0000 UTC m=+0.140905452 container start cc7e444efe54b9f1d6cb79353114832a1966737fd9246f8c62d036d546079efa (image=quay.io/ceph/ceph:v19, name=wonderful_sammet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 26 09:39:07 compute-0 podman[80208]: 2026-01-26 09:39:07.624022169 +0000 UTC m=+0.144089869 container attach cc7e444efe54b9f1d6cb79353114832a1966737fd9246f8c62d036d546079efa (image=quay.io/ceph/ceph:v19, name=wonderful_sammet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 09:39:07 compute-0 podman[80243]: 2026-01-26 09:39:07.65477205 +0000 UTC m=+0.033133616 container create 44c6a31ad7970b630afc988d530762fc0191ca3b217297936126a2581755299f (image=quay.io/ceph/ceph:v19, name=condescending_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 09:39:07 compute-0 systemd[1]: Started libpod-conmon-44c6a31ad7970b630afc988d530762fc0191ca3b217297936126a2581755299f.scope.
Jan 26 09:39:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:07 compute-0 podman[80243]: 2026-01-26 09:39:07.709732263 +0000 UTC m=+0.088093829 container init 44c6a31ad7970b630afc988d530762fc0191ca3b217297936126a2581755299f (image=quay.io/ceph/ceph:v19, name=condescending_jepsen, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 26 09:39:07 compute-0 podman[80243]: 2026-01-26 09:39:07.714324888 +0000 UTC m=+0.092686454 container start 44c6a31ad7970b630afc988d530762fc0191ca3b217297936126a2581755299f (image=quay.io/ceph/ceph:v19, name=condescending_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:39:07 compute-0 condescending_jepsen[80262]: 167 167
Jan 26 09:39:07 compute-0 podman[80243]: 2026-01-26 09:39:07.717619158 +0000 UTC m=+0.095980744 container attach 44c6a31ad7970b630afc988d530762fc0191ca3b217297936126a2581755299f (image=quay.io/ceph/ceph:v19, name=condescending_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 09:39:07 compute-0 systemd[1]: libpod-44c6a31ad7970b630afc988d530762fc0191ca3b217297936126a2581755299f.scope: Deactivated successfully.
Jan 26 09:39:07 compute-0 podman[80243]: 2026-01-26 09:39:07.718344428 +0000 UTC m=+0.096705994 container died 44c6a31ad7970b630afc988d530762fc0191ca3b217297936126a2581755299f (image=quay.io/ceph/ceph:v19, name=condescending_jepsen, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 26 09:39:07 compute-0 podman[80243]: 2026-01-26 09:39:07.639461521 +0000 UTC m=+0.017823107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:39:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 26 09:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-46830b6cf4794f5668316512a71d5ffb02cdc2eaa524ce63c9dd84887b550f02-merged.mount: Deactivated successfully.
Jan 26 09:39:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/715542857' entity='client.admin' 
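The second Ansible task in this window flips log_to_file to true cluster-wide (`ceph config set global log_to_file true`, run through the same throwaway-container pattern), so the containerized daemons also write logs under /var/log/ceph, which the crash container already had bind-mounted; the mon records the config set and the audit entry above. A minimal sketch for verifying the effective value afterwards; reading it via the "mon" section is an assumption, since `config set` targeted "global":

    import subprocess

    value = subprocess.run(
        ["ceph", "config", "get", "mon", "log_to_file"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(value)  # expected: "true" after the task above has run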
Jan 26 09:39:08 compute-0 systemd[1]: libpod-cc7e444efe54b9f1d6cb79353114832a1966737fd9246f8c62d036d546079efa.scope: Deactivated successfully.
Jan 26 09:39:08 compute-0 ceph-mon[74456]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 09:39:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:39:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:08 compute-0 ceph-mon[74456]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 26 09:39:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 09:39:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 09:39:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:08 compute-0 ceph-mon[74456]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 26 09:39:08 compute-0 ansible-async_wrapper.py[79039]: Done in kid B.
Jan 26 09:39:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:09 compute-0 podman[80243]: 2026-01-26 09:39:09.187112966 +0000 UTC m=+1.565474562 container remove 44c6a31ad7970b630afc988d530762fc0191ca3b217297936126a2581755299f (image=quay.io/ceph/ceph:v19, name=condescending_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:39:09 compute-0 podman[80208]: 2026-01-26 09:39:09.243790584 +0000 UTC m=+1.763858294 container died cc7e444efe54b9f1d6cb79353114832a1966737fd9246f8c62d036d546079efa (image=quay.io/ceph/ceph:v19, name=wonderful_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 09:39:09 compute-0 sudo[80183]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:39:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e373f5f159601bff3c3b846014869c4d8a6dde542bd5144f6dc350c2fa345ad-merged.mount: Deactivated successfully.
Jan 26 09:39:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:39:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:09 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.zllcia (unknown last config time)...
Jan 26 09:39:09 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.zllcia (unknown last config time)...
Jan 26 09:39:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.zllcia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 26 09:39:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zllcia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 09:39:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 26 09:39:09 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 09:39:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:39:09 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:09 compute-0 podman[80300]: 2026-01-26 09:39:09.292489146 +0000 UTC m=+1.064881419 container remove cc7e444efe54b9f1d6cb79353114832a1966737fd9246f8c62d036d546079efa (image=quay.io/ceph/ceph:v19, name=wonderful_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:39:09 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.zllcia on compute-0
Jan 26 09:39:09 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.zllcia on compute-0
Jan 26 09:39:09 compute-0 systemd[1]: libpod-conmon-cc7e444efe54b9f1d6cb79353114832a1966737fd9246f8c62d036d546079efa.scope: Deactivated successfully.
Jan 26 09:39:09 compute-0 systemd[1]: libpod-conmon-44c6a31ad7970b630afc988d530762fc0191ca3b217297936126a2581755299f.scope: Deactivated successfully.
Jan 26 09:39:09 compute-0 sudo[80155]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:09 compute-0 sudo[80314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:39:09 compute-0 sudo[80314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:09 compute-0 sudo[80314]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:09 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/715542857' entity='client.admin' 
Jan 26 09:39:09 compute-0 ceph-mon[74456]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:09 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:09 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:09 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zllcia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 09:39:09 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 09:39:09 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:09 compute-0 sudo[80339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:39:09 compute-0 sudo[80339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:09 compute-0 sudo[80387]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eegqqtntvopcldjwityyirfufztoklnu ; /usr/bin/python3'
Jan 26 09:39:09 compute-0 sudo[80387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:39:09 compute-0 python3[80389]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:39:09 compute-0 podman[80397]: 2026-01-26 09:39:09.693371523 +0000 UTC m=+0.044788345 container create af2c02f24088c242f4c1d640139e292a39061977445eb996f679d97915ec5644 (image=quay.io/ceph/ceph:v19, name=modest_proskuriakova, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 09:39:09 compute-0 systemd[1]: Started libpod-conmon-af2c02f24088c242f4c1d640139e292a39061977445eb996f679d97915ec5644.scope.
Jan 26 09:39:09 compute-0 podman[80418]: 2026-01-26 09:39:09.733873621 +0000 UTC m=+0.036103048 container create 65a9162fa1044ed76d4826866694def7e80c77646ea7749cfdb0f5dc803cea19 (image=quay.io/ceph/ceph:v19, name=sad_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:39:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94ea4f562e3d72e87c2e65261877e85ed448355866d190dede70ff1be91b13f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94ea4f562e3d72e87c2e65261877e85ed448355866d190dede70ff1be91b13f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94ea4f562e3d72e87c2e65261877e85ed448355866d190dede70ff1be91b13f3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:09 compute-0 podman[80397]: 2026-01-26 09:39:09.769047842 +0000 UTC m=+0.120464694 container init af2c02f24088c242f4c1d640139e292a39061977445eb996f679d97915ec5644 (image=quay.io/ceph/ceph:v19, name=modest_proskuriakova, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 26 09:39:09 compute-0 systemd[1]: Started libpod-conmon-65a9162fa1044ed76d4826866694def7e80c77646ea7749cfdb0f5dc803cea19.scope.
Jan 26 09:39:09 compute-0 podman[80397]: 2026-01-26 09:39:09.677088269 +0000 UTC m=+0.028505111 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:39:09 compute-0 podman[80397]: 2026-01-26 09:39:09.775233922 +0000 UTC m=+0.126650744 container start af2c02f24088c242f4c1d640139e292a39061977445eb996f679d97915ec5644 (image=quay.io/ceph/ceph:v19, name=modest_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:39:09 compute-0 podman[80397]: 2026-01-26 09:39:09.778434279 +0000 UTC m=+0.129851101 container attach af2c02f24088c242f4c1d640139e292a39061977445eb996f679d97915ec5644 (image=quay.io/ceph/ceph:v19, name=modest_proskuriakova, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Jan 26 09:39:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:09 compute-0 podman[80418]: 2026-01-26 09:39:09.791040303 +0000 UTC m=+0.093269760 container init 65a9162fa1044ed76d4826866694def7e80c77646ea7749cfdb0f5dc803cea19 (image=quay.io/ceph/ceph:v19, name=sad_blackwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:39:09 compute-0 podman[80418]: 2026-01-26 09:39:09.796711308 +0000 UTC m=+0.098940735 container start 65a9162fa1044ed76d4826866694def7e80c77646ea7749cfdb0f5dc803cea19 (image=quay.io/ceph/ceph:v19, name=sad_blackwell, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:39:09 compute-0 sad_blackwell[80441]: 167 167
Jan 26 09:39:09 compute-0 systemd[1]: libpod-65a9162fa1044ed76d4826866694def7e80c77646ea7749cfdb0f5dc803cea19.scope: Deactivated successfully.
Jan 26 09:39:09 compute-0 podman[80418]: 2026-01-26 09:39:09.800378719 +0000 UTC m=+0.102608196 container attach 65a9162fa1044ed76d4826866694def7e80c77646ea7749cfdb0f5dc803cea19 (image=quay.io/ceph/ceph:v19, name=sad_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 26 09:39:09 compute-0 podman[80418]: 2026-01-26 09:39:09.800859622 +0000 UTC m=+0.103089059 container died 65a9162fa1044ed76d4826866694def7e80c77646ea7749cfdb0f5dc803cea19 (image=quay.io/ceph/ceph:v19, name=sad_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 09:39:09 compute-0 podman[80418]: 2026-01-26 09:39:09.718910372 +0000 UTC m=+0.021139819 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:39:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-1184b9f8af1f6a9e7753fbafa37f4485aa27b3a5d8d50f30a3fce41e7956e6fb-merged.mount: Deactivated successfully.
Jan 26 09:39:09 compute-0 podman[80418]: 2026-01-26 09:39:09.836541688 +0000 UTC m=+0.138771115 container remove 65a9162fa1044ed76d4826866694def7e80c77646ea7749cfdb0f5dc803cea19 (image=quay.io/ceph/ceph:v19, name=sad_blackwell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 26 09:39:09 compute-0 systemd[1]: libpod-conmon-65a9162fa1044ed76d4826866694def7e80c77646ea7749cfdb0f5dc803cea19.scope: Deactivated successfully.
Jan 26 09:39:09 compute-0 sudo[80339]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:39:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:39:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:39:09 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:39:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:39:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:39:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:09 compute-0 sudo[80475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:39:09 compute-0 sudo[80475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:09 compute-0 sudo[80475]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 26 09:39:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2239630755' entity='client.admin' 
Jan 26 09:39:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:39:10 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:39:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:39:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:39:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:10 compute-0 systemd[1]: libpod-af2c02f24088c242f4c1d640139e292a39061977445eb996f679d97915ec5644.scope: Deactivated successfully.
Jan 26 09:39:10 compute-0 podman[80397]: 2026-01-26 09:39:10.166748993 +0000 UTC m=+0.518165845 container died af2c02f24088c242f4c1d640139e292a39061977445eb996f679d97915ec5644 (image=quay.io/ceph/ceph:v19, name=modest_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Jan 26 09:39:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-94ea4f562e3d72e87c2e65261877e85ed448355866d190dede70ff1be91b13f3-merged.mount: Deactivated successfully.
Jan 26 09:39:10 compute-0 podman[80397]: 2026-01-26 09:39:10.208859585 +0000 UTC m=+0.560276407 container remove af2c02f24088c242f4c1d640139e292a39061977445eb996f679d97915ec5644 (image=quay.io/ceph/ceph:v19, name=modest_proskuriakova, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 09:39:10 compute-0 sudo[80502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:39:10 compute-0 sudo[80502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:10 compute-0 systemd[1]: libpod-conmon-af2c02f24088c242f4c1d640139e292a39061977445eb996f679d97915ec5644.scope: Deactivated successfully.
Jan 26 09:39:10 compute-0 sudo[80502]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:10 compute-0 sudo[80387]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:10 compute-0 ceph-mon[74456]: Reconfiguring mgr.compute-0.zllcia (unknown last config time)...
Jan 26 09:39:10 compute-0 ceph-mon[74456]: Reconfiguring daemon mgr.compute-0.zllcia on compute-0
Jan 26 09:39:10 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:10 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:10 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:10 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:39:10 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:10 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2239630755' entity='client.admin' 
Jan 26 09:39:10 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:10 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:39:10 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:10 compute-0 sudo[80562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxeznvdjlodzenqvtbmruzqdnwvpkktt ; /usr/bin/python3'
Jan 26 09:39:10 compute-0 sudo[80562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:39:10 compute-0 python3[80564]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:39:10 compute-0 podman[80565]: 2026-01-26 09:39:10.621443172 +0000 UTC m=+0.036761036 container create 65afd468c319d4765ddc86e34a859624ba6d2a9477c07025bb4847edb84b735e (image=quay.io/ceph/ceph:v19, name=happy_swirles, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 26 09:39:10 compute-0 systemd[1]: Started libpod-conmon-65afd468c319d4765ddc86e34a859624ba6d2a9477c07025bb4847edb84b735e.scope.
Jan 26 09:39:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2190380616503726fbdf4d96094976ea1a3d2873b9a8d81debf899dadb6cdb0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2190380616503726fbdf4d96094976ea1a3d2873b9a8d81debf899dadb6cdb0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2190380616503726fbdf4d96094976ea1a3d2873b9a8d81debf899dadb6cdb0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:10 compute-0 podman[80565]: 2026-01-26 09:39:10.695893237 +0000 UTC m=+0.111211101 container init 65afd468c319d4765ddc86e34a859624ba6d2a9477c07025bb4847edb84b735e (image=quay.io/ceph/ceph:v19, name=happy_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:39:10 compute-0 podman[80565]: 2026-01-26 09:39:10.604001996 +0000 UTC m=+0.019319880 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:39:10 compute-0 podman[80565]: 2026-01-26 09:39:10.704675088 +0000 UTC m=+0.119992952 container start 65afd468c319d4765ddc86e34a859624ba6d2a9477c07025bb4847edb84b735e (image=quay.io/ceph/ceph:v19, name=happy_swirles, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 26 09:39:10 compute-0 podman[80565]: 2026-01-26 09:39:10.707552156 +0000 UTC m=+0.122870120 container attach 65afd468c319d4765ddc86e34a859624ba6d2a9477c07025bb4847edb84b735e (image=quay.io/ceph/ceph:v19, name=happy_swirles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:39:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 26 09:39:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1048924418' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 26 09:39:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 26 09:39:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 09:39:11 compute-0 ceph-mon[74456]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:11 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1048924418' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 26 09:39:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1048924418' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 26 09:39:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 26 09:39:11 compute-0 happy_swirles[80581]: set require_min_compat_client to mimic
Jan 26 09:39:11 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 26 09:39:11 compute-0 systemd[1]: libpod-65afd468c319d4765ddc86e34a859624ba6d2a9477c07025bb4847edb84b735e.scope: Deactivated successfully.
Jan 26 09:39:11 compute-0 podman[80565]: 2026-01-26 09:39:11.415419154 +0000 UTC m=+0.830737018 container died 65afd468c319d4765ddc86e34a859624ba6d2a9477c07025bb4847edb84b735e (image=quay.io/ceph/ceph:v19, name=happy_swirles, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 26 09:39:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2190380616503726fbdf4d96094976ea1a3d2873b9a8d81debf899dadb6cdb0-merged.mount: Deactivated successfully.
Jan 26 09:39:11 compute-0 podman[80565]: 2026-01-26 09:39:11.456212719 +0000 UTC m=+0.871530583 container remove 65afd468c319d4765ddc86e34a859624ba6d2a9477c07025bb4847edb84b735e (image=quay.io/ceph/ceph:v19, name=happy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:39:11 compute-0 systemd[1]: libpod-conmon-65afd468c319d4765ddc86e34a859624ba6d2a9477c07025bb4847edb84b735e.scope: Deactivated successfully.
Jan 26 09:39:11 compute-0 sudo[80562]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:11 compute-0 sudo[80642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsbsmokotgsyesikxdjkotxbmucyypwi ; /usr/bin/python3'
Jan 26 09:39:11 compute-0 sudo[80642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:39:12 compute-0 python3[80644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:39:12 compute-0 podman[80645]: 2026-01-26 09:39:12.122403249 +0000 UTC m=+0.045745200 container create 37109921c6eefa80a77c771858b89152ed65cf29761446552c53ac4673258580 (image=quay.io/ceph/ceph:v19, name=boring_hugle, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:39:12 compute-0 systemd[1]: Started libpod-conmon-37109921c6eefa80a77c771858b89152ed65cf29761446552c53ac4673258580.scope.
Jan 26 09:39:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:39:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63dae3746952209682d568d050bb18d8f89e388fe6e0e48272e3a5507f826f28/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63dae3746952209682d568d050bb18d8f89e388fe6e0e48272e3a5507f826f28/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63dae3746952209682d568d050bb18d8f89e388fe6e0e48272e3a5507f826f28/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:12 compute-0 podman[80645]: 2026-01-26 09:39:12.098956268 +0000 UTC m=+0.022298249 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:39:12 compute-0 podman[80645]: 2026-01-26 09:39:12.201257716 +0000 UTC m=+0.124599707 container init 37109921c6eefa80a77c771858b89152ed65cf29761446552c53ac4673258580 (image=quay.io/ceph/ceph:v19, name=boring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:39:12 compute-0 podman[80645]: 2026-01-26 09:39:12.206265012 +0000 UTC m=+0.129606963 container start 37109921c6eefa80a77c771858b89152ed65cf29761446552c53ac4673258580 (image=quay.io/ceph/ceph:v19, name=boring_hugle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 26 09:39:12 compute-0 podman[80645]: 2026-01-26 09:39:12.208978916 +0000 UTC m=+0.132320867 container attach 37109921c6eefa80a77c771858b89152ed65cf29761446552c53ac4673258580 (image=quay.io/ceph/ceph:v19, name=boring_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:39:12 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1048924418' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 26 09:39:12 compute-0 ceph-mon[74456]: osdmap e3: 0 total, 0 up, 0 in
Jan 26 09:39:12 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:39:12 compute-0 sudo[80684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:39:12 compute-0 sudo[80684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:12 compute-0 sudo[80684]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:12 compute-0 sudo[80709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Jan 26 09:39:12 compute-0 sudo[80709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:12 compute-0 sudo[80709]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 26 09:39:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 26 09:39:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 26 09:39:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 26 09:39:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:13 compute-0 ceph-mgr[74755]: [cephadm INFO root] Added host compute-0
Jan 26 09:39:13 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 26 09:39:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:39:13 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:39:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:39:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:39:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:13 compute-0 sudo[80755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:39:13 compute-0 sudo[80755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:39:13 compute-0 sudo[80755]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:13 compute-0 ceph-mon[74456]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:39:13 compute-0 ceph-mon[74456]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:13 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:13 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:13 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:13 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:13 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:39:13 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:39:13 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:14 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 26 09:39:14 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 26 09:39:14 compute-0 ceph-mon[74456]: Added host compute-0
Jan 26 09:39:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:15 compute-0 ceph-mon[74456]: Deploying cephadm binary to compute-1
Jan 26 09:39:15 compute-0 ceph-mon[74456]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:39:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:39:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:39:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:39:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:39:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:39:16 compute-0 ceph-mon[74456]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:39:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 26 09:39:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:18 compute-0 ceph-mgr[74755]: [cephadm INFO root] Added host compute-1
Jan 26 09:39:18 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 26 09:39:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:39:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:39:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:19 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:19 compute-0 ceph-mon[74456]: Added host compute-1
Jan 26 09:39:19 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:19 compute-0 ceph-mon[74456]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:19 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:19 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 26 09:39:19 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 26 09:39:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:39:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:20 compute-0 ceph-mon[74456]: Deploying cephadm binary to compute-2
Jan 26 09:39:20 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:21 compute-0 ceph-mon[74456]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:39:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 26 09:39:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:23 compute-0 ceph-mgr[74755]: [cephadm INFO root] Added host compute-2
Jan 26 09:39:23 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 26 09:39:23 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 26 09:39:23 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 26 09:39:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 26 09:39:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:23 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 26 09:39:23 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 26 09:39:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 26 09:39:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:23 compute-0 ceph-mgr[74755]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 26 09:39:23 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 26 09:39:23 compute-0 ceph-mgr[74755]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 26 09:39:23 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 26 09:39:23 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 26 09:39:23 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 26 09:39:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 26 09:39:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:23 compute-0 boring_hugle[80660]: Added host 'compute-0' with addr '192.168.122.100'
Jan 26 09:39:23 compute-0 boring_hugle[80660]: Added host 'compute-1' with addr '192.168.122.101'
Jan 26 09:39:23 compute-0 boring_hugle[80660]: Added host 'compute-2' with addr '192.168.122.102'
Jan 26 09:39:23 compute-0 boring_hugle[80660]: Scheduled mon update...
Jan 26 09:39:23 compute-0 boring_hugle[80660]: Scheduled mgr update...
Jan 26 09:39:23 compute-0 boring_hugle[80660]: Scheduled osd.default_drive_group update...
Jan 26 09:39:23 compute-0 systemd[1]: libpod-37109921c6eefa80a77c771858b89152ed65cf29761446552c53ac4673258580.scope: Deactivated successfully.
Jan 26 09:39:23 compute-0 conmon[80660]: conmon 37109921c6eefa80a77c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37109921c6eefa80a77c771858b89152ed65cf29761446552c53ac4673258580.scope/container/memory.events
Jan 26 09:39:23 compute-0 podman[80781]: 2026-01-26 09:39:23.483727032 +0000 UTC m=+0.040894800 container died 37109921c6eefa80a77c771858b89152ed65cf29761446552c53ac4673258580 (image=quay.io/ceph/ceph:v19, name=boring_hugle, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 09:39:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-63dae3746952209682d568d050bb18d8f89e388fe6e0e48272e3a5507f826f28-merged.mount: Deactivated successfully.
Jan 26 09:39:23 compute-0 podman[80781]: 2026-01-26 09:39:23.534011687 +0000 UTC m=+0.091179415 container remove 37109921c6eefa80a77c771858b89152ed65cf29761446552c53ac4673258580 (image=quay.io/ceph/ceph:v19, name=boring_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 26 09:39:23 compute-0 systemd[1]: libpod-conmon-37109921c6eefa80a77c771858b89152ed65cf29761446552c53ac4673258580.scope: Deactivated successfully.
Jan 26 09:39:23 compute-0 sudo[80642]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:23 compute-0 sudo[80819]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utojfoeurtqyjtqaprqjqrakeakyxrmt ; /usr/bin/python3'
Jan 26 09:39:23 compute-0 sudo[80819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:39:23 compute-0 ceph-mon[74456]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:23 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:23 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:23 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:23 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:39:24 compute-0 python3[80821]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
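
Re-wrapped for readability, the _raw_params above are this single pipeline (copied verbatim from the log line, nothing added); the playbook is polling the cluster until jq reports a non-zero count of up OSDs:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      status --format json | jq .osdmap.num_up_osds
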
Jan 26 09:39:24 compute-0 podman[80823]: 2026-01-26 09:39:24.059938892 +0000 UTC m=+0.043795708 container create d5e4382b83a33a94caa2c674d3729b16427511440e5e3ef36c058e5c70f7154f (image=quay.io/ceph/ceph:v19, name=practical_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:39:24 compute-0 systemd[1]: Started libpod-conmon-d5e4382b83a33a94caa2c674d3729b16427511440e5e3ef36c058e5c70f7154f.scope.
Jan 26 09:39:24 compute-0 podman[80823]: 2026-01-26 09:39:24.039537965 +0000 UTC m=+0.023394791 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:39:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1243665d14aa4da602d01e38f12eba05b0b0d473229767f60a823ca789f5a0b3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1243665d14aa4da602d01e38f12eba05b0b0d473229767f60a823ca789f5a0b3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1243665d14aa4da602d01e38f12eba05b0b0d473229767f60a823ca789f5a0b3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:24 compute-0 podman[80823]: 2026-01-26 09:39:24.169144047 +0000 UTC m=+0.153000913 container init d5e4382b83a33a94caa2c674d3729b16427511440e5e3ef36c058e5c70f7154f (image=quay.io/ceph/ceph:v19, name=practical_shirley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 26 09:39:24 compute-0 podman[80823]: 2026-01-26 09:39:24.17949865 +0000 UTC m=+0.163355496 container start d5e4382b83a33a94caa2c674d3729b16427511440e5e3ef36c058e5c70f7154f (image=quay.io/ceph/ceph:v19, name=practical_shirley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 09:39:24 compute-0 podman[80823]: 2026-01-26 09:39:24.183901981 +0000 UTC m=+0.167758897 container attach d5e4382b83a33a94caa2c674d3729b16427511440e5e3ef36c058e5c70f7154f (image=quay.io/ceph/ceph:v19, name=practical_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 09:39:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 26 09:39:24 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1811873022' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 09:39:24 compute-0 practical_shirley[80839]: 
Jan 26 09:39:24 compute-0 practical_shirley[80839]: {"fsid":"1a70b85d-e3fd-5814-8a6a-37ea00fcae30","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":57,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-26T09:38:21:975599+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-26T09:38:22.027389+0000","services":{}},"progress_events":{}}
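
Against the status payload above, the jq filter from that command resolves to 0 (osdmap.num_up_osds is 0, and the TOO_FEW_OSDS warning says the same), which is why the identical check is re-run at 09:39:54. A minimal stand-alone version, with $status_json standing in for the JSON line above (hypothetical variable name):

    echo "$status_json" | jq .osdmap.num_up_osds    # -> 0 for the payload above
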
Jan 26 09:39:24 compute-0 systemd[1]: libpod-d5e4382b83a33a94caa2c674d3729b16427511440e5e3ef36c058e5c70f7154f.scope: Deactivated successfully.
Jan 26 09:39:24 compute-0 podman[80823]: 2026-01-26 09:39:24.620028491 +0000 UTC m=+0.603885337 container died d5e4382b83a33a94caa2c674d3729b16427511440e5e3ef36c058e5c70f7154f (image=quay.io/ceph/ceph:v19, name=practical_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1243665d14aa4da602d01e38f12eba05b0b0d473229767f60a823ca789f5a0b3-merged.mount: Deactivated successfully.
Jan 26 09:39:24 compute-0 podman[80823]: 2026-01-26 09:39:24.662897983 +0000 UTC m=+0.646754809 container remove d5e4382b83a33a94caa2c674d3729b16427511440e5e3ef36c058e5c70f7154f (image=quay.io/ceph/ceph:v19, name=practical_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:39:24 compute-0 systemd[1]: libpod-conmon-d5e4382b83a33a94caa2c674d3729b16427511440e5e3ef36c058e5c70f7154f.scope: Deactivated successfully.
Jan 26 09:39:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:24 compute-0 sudo[80819]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:24 compute-0 ceph-mon[74456]: Added host compute-2
Jan 26 09:39:24 compute-0 ceph-mon[74456]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 26 09:39:24 compute-0 ceph-mon[74456]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 26 09:39:24 compute-0 ceph-mon[74456]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 26 09:39:24 compute-0 ceph-mon[74456]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 26 09:39:24 compute-0 ceph-mon[74456]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
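
The log never prints ceph_spec.yaml itself, but its shape can be inferred from the placement lines above and from the ceph-volume call at 09:40:08 further down. A sketch only, not the verbatim file:

    # Hypothetical reconstruction of the osd.default_drive_group spec
    ceph orch apply -i - <<'EOF'
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    data_devices:
      paths:
        - /dev/ceph_vg0/ceph_lv0
    EOF
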
Jan 26 09:39:24 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1811873022' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 09:39:24 compute-0 ceph-mon[74456]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:27 compute-0 ceph-mon[74456]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:39:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:28 compute-0 ceph-mon[74456]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:30 compute-0 ceph-mon[74456]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:39:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:32 compute-0 ceph-mon[74456]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:34 compute-0 ceph-mon[74456]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:36 compute-0 ceph-mon[74456]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:39:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:39 compute-0 ceph-mon[74456]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:40 compute-0 ceph-mon[74456]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:39:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:42 compute-0 ceph-mon[74456]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:44 compute-0 ceph-mon[74456]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:39:46
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: [balancer INFO root] No pools available
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:39:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:39:46 compute-0 ceph-mon[74456]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:39:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:48 compute-0 ceph-mon[74456]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:50 compute-0 ceph-mon[74456]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:51 compute-0 sshd-session[80875]: Invalid user admin from 157.245.76.178 port 57204
Jan 26 09:39:51 compute-0 sshd-session[80875]: Connection closed by invalid user admin 157.245.76.178 port 57204 [preauth]
Jan 26 09:39:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:39:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:52 compute-0 ceph-mon[74456]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:54 compute-0 ceph-mon[74456]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:54 compute-0 sudo[80900]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xspamqwvzvtjgstpajavyoggpwdbiwwa ; /usr/bin/python3'
Jan 26 09:39:54 compute-0 sudo[80900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:39:54 compute-0 python3[80902]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:39:55 compute-0 podman[80904]: 2026-01-26 09:39:55.002300127 +0000 UTC m=+0.048623318 container create 81d85e1145beb1f9d9e52391949e893fa027c8ba6ad68721c842a916b214af5f (image=quay.io/ceph/ceph:v19, name=zen_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 09:39:55 compute-0 systemd[1]: Started libpod-conmon-81d85e1145beb1f9d9e52391949e893fa027c8ba6ad68721c842a916b214af5f.scope.
Jan 26 09:39:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:39:55 compute-0 podman[80904]: 2026-01-26 09:39:54.980774567 +0000 UTC m=+0.027097798 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a264ac52ba05254d72e33621e77f08646b4ad19a37ace1e4b3852a4db265e2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a264ac52ba05254d72e33621e77f08646b4ad19a37ace1e4b3852a4db265e2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a264ac52ba05254d72e33621e77f08646b4ad19a37ace1e4b3852a4db265e2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:39:55 compute-0 podman[80904]: 2026-01-26 09:39:55.095836183 +0000 UTC m=+0.142159424 container init 81d85e1145beb1f9d9e52391949e893fa027c8ba6ad68721c842a916b214af5f (image=quay.io/ceph/ceph:v19, name=zen_franklin, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:39:55 compute-0 podman[80904]: 2026-01-26 09:39:55.102928361 +0000 UTC m=+0.149251552 container start 81d85e1145beb1f9d9e52391949e893fa027c8ba6ad68721c842a916b214af5f (image=quay.io/ceph/ceph:v19, name=zen_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Jan 26 09:39:55 compute-0 podman[80904]: 2026-01-26 09:39:55.106411064 +0000 UTC m=+0.152734255 container attach 81d85e1145beb1f9d9e52391949e893fa027c8ba6ad68721c842a916b214af5f (image=quay.io/ceph/ceph:v19, name=zen_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 09:39:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 26 09:39:55 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1927493537' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 09:39:55 compute-0 zen_franklin[80921]: 
Jan 26 09:39:55 compute-0 zen_franklin[80921]: {"fsid":"1a70b85d-e3fd-5814-8a6a-37ea00fcae30","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":88,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-26T09:38:21:975599+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-26T09:39:48.684414+0000","services":{}},"progress_events":{}}
Jan 26 09:39:55 compute-0 systemd[1]: libpod-81d85e1145beb1f9d9e52391949e893fa027c8ba6ad68721c842a916b214af5f.scope: Deactivated successfully.
Jan 26 09:39:55 compute-0 conmon[80921]: conmon 81d85e1145beb1f9d9e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81d85e1145beb1f9d9e52391949e893fa027c8ba6ad68721c842a916b214af5f.scope/container/memory.events
Jan 26 09:39:55 compute-0 podman[80904]: 2026-01-26 09:39:55.539884931 +0000 UTC m=+0.586208102 container died 81d85e1145beb1f9d9e52391949e893fa027c8ba6ad68721c842a916b214af5f (image=quay.io/ceph/ceph:v19, name=zen_franklin, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:39:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-47a264ac52ba05254d72e33621e77f08646b4ad19a37ace1e4b3852a4db265e2-merged.mount: Deactivated successfully.
Jan 26 09:39:55 compute-0 podman[80904]: 2026-01-26 09:39:55.579642843 +0000 UTC m=+0.625966024 container remove 81d85e1145beb1f9d9e52391949e893fa027c8ba6ad68721c842a916b214af5f (image=quay.io/ceph/ceph:v19, name=zen_franklin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 26 09:39:55 compute-0 systemd[1]: libpod-conmon-81d85e1145beb1f9d9e52391949e893fa027c8ba6ad68721c842a916b214af5f.scope: Deactivated successfully.
Jan 26 09:39:55 compute-0 sudo[80900]: pam_unix(sudo:session): session closed for user root
Jan 26 09:39:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1927493537' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 09:39:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:56 compute-0 ceph-mon[74456]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:39:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:39:58 compute-0 ceph-mon[74456]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:01 compute-0 ceph-mon[74456]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:40:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:02 compute-0 ceph-mon[74456]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:40:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:40:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:40:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:40:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 26 09:40:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 09:40:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:40:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:40:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:40:03 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:40:03 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:40:03 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:40:03 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:40:04 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:04 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:04 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:04 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:04 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 09:40:04 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:04 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:40:04 compute-0 ceph-mon[74456]: Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:40:04 compute-0 ceph-mon[74456]: Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:40:04 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:40:04 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:40:04 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:40:04 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:40:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:05 compute-0 ceph-mon[74456]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:40:05 compute-0 ceph-mon[74456]: Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:40:05 compute-0 ceph-mon[74456]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:40:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:40:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:40:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:05 compute-0 ceph-mgr[74755]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 26 09:40:05 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 26 09:40:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:05 compute-0 ceph-mgr[74755]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 26 09:40:05 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
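
Both failures have the same cause: the mon and mgr specs name compute-2, but at this point the orchestrator's host inventory does not contain it, even though "Added host compute-2" was logged at 09:39:24; this may indicate the serve loop evaluated the specs before the host store was refreshed. To confirm what cephadm currently knows (assuming an admin keyring is available):

    ceph orch host ls                          # list hosts registered with cephadm
    ceph orch apply -i /home/ceph_spec.yaml    # re-apply once compute-2 is listed
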
Jan 26 09:40:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:05 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev b6241c50-4d57-4a51-ac5e-9adb1c52e306 (Updating crash deployment (+1 -> 2))
Jan 26 09:40:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 26 09:40:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:40:05.207+0000 7ff4c081a640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: service_name: mon
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: placement:
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   hosts:
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   - compute-0
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   - compute-1
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   - compute-2
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:40:05.208+0000 7ff4c081a640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: service_name: mgr
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: placement:
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   hosts:
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   - compute-0
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   - compute-1
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   - compute-2
Jan 26 09:40:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 26 09:40:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 26 09:40:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:40:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:05 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 26 09:40:05 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Jan 26 09:40:06 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
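
CEPHADM_APPLY_SPEC_FAIL is raised as an ordinary health check, so it persists in cluster status until the specs apply cleanly; the per-spec error text can be read back without grepping the journal:

    ceph health detail    # shows CEPHADM_APPLY_SPEC_FAIL with the failing spec messages
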
Jan 26 09:40:06 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:06 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:06 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:06 compute-0 ceph-mon[74456]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 26 09:40:06 compute-0 ceph-mon[74456]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:06 compute-0 ceph-mon[74456]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 26 09:40:06 compute-0 ceph-mon[74456]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:06 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 09:40:06 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 26 09:40:06 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:06 compute-0 ceph-mon[74456]: Deploying daemon crash.compute-1 on compute-1
Jan 26 09:40:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:40:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:07 compute-0 ceph-mon[74456]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 26 09:40:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:40:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:40:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 26 09:40:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:08 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev b6241c50-4d57-4a51-ac5e-9adb1c52e306 (Updating crash deployment (+1 -> 2))
Jan 26 09:40:08 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event b6241c50-4d57-4a51-ac5e-9adb1c52e306 (Updating crash deployment (+1 -> 2)) in 3 seconds
Jan 26 09:40:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 26 09:40:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:40:08 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:40:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:40:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:40:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:40:08 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:40:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:40:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:40:08 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:08 compute-0 sudo[80957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:40:08 compute-0 sudo[80957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:08 compute-0 sudo[80957]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:08 compute-0 sudo[80982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:40:08 compute-0 sudo[80982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
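
Stripped of the cephadm wrapper, the OSD creation step boils down to this ceph-volume call inside the pinned ceph image (arguments copied from the COMMAND line above; the --config-json payload is fed on stdin and never appears in the log):

    CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group \
    ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 \
      lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
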
Jan 26 09:40:08 compute-0 ceph-mon[74456]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:40:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:40:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:40:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:08 compute-0 podman[81046]: 2026-01-26 09:40:08.58257615 +0000 UTC m=+0.034193077 container create b8f908731920b763cf78d7aaa58c1ba2ab508f219c6ffbf0daa1d118157ed7aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:40:08 compute-0 systemd[1]: Started libpod-conmon-b8f908731920b763cf78d7aaa58c1ba2ab508f219c6ffbf0daa1d118157ed7aa.scope.
Jan 26 09:40:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:08 compute-0 podman[81046]: 2026-01-26 09:40:08.636806386 +0000 UTC m=+0.088423333 container init b8f908731920b763cf78d7aaa58c1ba2ab508f219c6ffbf0daa1d118157ed7aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:40:08 compute-0 podman[81046]: 2026-01-26 09:40:08.641849719 +0000 UTC m=+0.093466646 container start b8f908731920b763cf78d7aaa58c1ba2ab508f219c6ffbf0daa1d118157ed7aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:40:08 compute-0 peaceful_golick[81063]: 167 167
Jan 26 09:40:08 compute-0 podman[81046]: 2026-01-26 09:40:08.644956621 +0000 UTC m=+0.096573578 container attach b8f908731920b763cf78d7aaa58c1ba2ab508f219c6ffbf0daa1d118157ed7aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:40:08 compute-0 systemd[1]: libpod-b8f908731920b763cf78d7aaa58c1ba2ab508f219c6ffbf0daa1d118157ed7aa.scope: Deactivated successfully.
Jan 26 09:40:08 compute-0 podman[81046]: 2026-01-26 09:40:08.645844435 +0000 UTC m=+0.097461362 container died b8f908731920b763cf78d7aaa58c1ba2ab508f219c6ffbf0daa1d118157ed7aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 26 09:40:08 compute-0 podman[81046]: 2026-01-26 09:40:08.567322606 +0000 UTC m=+0.018939553 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:40:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2d04d41f7231de68d4e0448ce8ca21525fd67f953e0b840d7b06955ccb9245d-merged.mount: Deactivated successfully.
Jan 26 09:40:08 compute-0 podman[81046]: 2026-01-26 09:40:08.677426851 +0000 UTC m=+0.129043778 container remove b8f908731920b763cf78d7aaa58c1ba2ab508f219c6ffbf0daa1d118157ed7aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 26 09:40:08 compute-0 systemd[1]: libpod-conmon-b8f908731920b763cf78d7aaa58c1ba2ab508f219c6ffbf0daa1d118157ed7aa.scope: Deactivated successfully.
Jan 26 09:40:08 compute-0 podman[81087]: 2026-01-26 09:40:08.811388547 +0000 UTC m=+0.037436922 container create 3b881f7d672b1da48540ce7a06df892cd23161d4374dfeb3d3e91663018b88a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 26 09:40:08 compute-0 systemd[1]: Started libpod-conmon-3b881f7d672b1da48540ce7a06df892cd23161d4374dfeb3d3e91663018b88a8.scope.
Jan 26 09:40:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b085da4a50b28b326197a981553c0dc46836df8a8a2ab2a8485c9b80c2deca0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b085da4a50b28b326197a981553c0dc46836df8a8a2ab2a8485c9b80c2deca0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b085da4a50b28b326197a981553c0dc46836df8a8a2ab2a8485c9b80c2deca0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b085da4a50b28b326197a981553c0dc46836df8a8a2ab2a8485c9b80c2deca0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b085da4a50b28b326197a981553c0dc46836df8a8a2ab2a8485c9b80c2deca0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:08 compute-0 podman[81087]: 2026-01-26 09:40:08.886404874 +0000 UTC m=+0.112453269 container init 3b881f7d672b1da48540ce7a06df892cd23161d4374dfeb3d3e91663018b88a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_germain, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:40:08 compute-0 podman[81087]: 2026-01-26 09:40:08.793737101 +0000 UTC m=+0.019785496 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:40:08 compute-0 podman[81087]: 2026-01-26 09:40:08.894684913 +0000 UTC m=+0.120733288 container start 3b881f7d672b1da48540ce7a06df892cd23161d4374dfeb3d3e91663018b88a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_germain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 26 09:40:08 compute-0 podman[81087]: 2026-01-26 09:40:08.898249467 +0000 UTC m=+0.124297842 container attach 3b881f7d672b1da48540ce7a06df892cd23161d4374dfeb3d3e91663018b88a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_germain, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:40:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:09 compute-0 frosty_germain[81103]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:40:09 compute-0 frosty_germain[81103]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 26 09:40:09 compute-0 frosty_germain[81103]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 26 09:40:09 compute-0 frosty_germain[81103]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ac85653c-ceaa-4fd5-80ce-94914596ed49
Jan 26 09:40:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ac85653c-ceaa-4fd5-80ce-94914596ed49"} v 0)
Jan 26 09:40:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3266997153' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac85653c-ceaa-4fd5-80ce-94914596ed49"}]: dispatch
Jan 26 09:40:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 26 09:40:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 09:40:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3266997153' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ac85653c-ceaa-4fd5-80ce-94914596ed49"}]': finished
Jan 26 09:40:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 26 09:40:09 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 26 09:40:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 26 09:40:09 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:09 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 09:40:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8ed5e8fe-4547-4eab-be95-05fd5f9f3f95"} v 0)
Jan 26 09:40:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2580711276' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8ed5e8fe-4547-4eab-be95-05fd5f9f3f95"}]: dispatch
Jan 26 09:40:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 26 09:40:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 09:40:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2580711276' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8ed5e8fe-4547-4eab-be95-05fd5f9f3f95"}]': finished
Jan 26 09:40:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 26 09:40:09 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 26 09:40:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 26 09:40:09 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 26 09:40:09 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:09 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 09:40:09 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 09:40:09 compute-0 frosty_germain[81103]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 26 09:40:09 compute-0 frosty_germain[81103]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 26 09:40:09 compute-0 frosty_germain[81103]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 26 09:40:09 compute-0 frosty_germain[81103]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:09 compute-0 lvm[81164]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:40:09 compute-0 lvm[81164]: VG ceph_vg0 finished
Jan 26 09:40:09 compute-0 frosty_germain[81103]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 26 09:40:10 compute-0 ceph-mon[74456]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:10 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3266997153' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ac85653c-ceaa-4fd5-80ce-94914596ed49"}]: dispatch
Jan 26 09:40:10 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3266997153' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ac85653c-ceaa-4fd5-80ce-94914596ed49"}]': finished
Jan 26 09:40:10 compute-0 ceph-mon[74456]: osdmap e4: 1 total, 0 up, 1 in
Jan 26 09:40:10 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:10 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2580711276' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8ed5e8fe-4547-4eab-be95-05fd5f9f3f95"}]: dispatch
Jan 26 09:40:10 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2580711276' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8ed5e8fe-4547-4eab-be95-05fd5f9f3f95"}]': finished
Jan 26 09:40:10 compute-0 ceph-mon[74456]: osdmap e5: 2 total, 0 up, 2 in
Jan 26 09:40:10 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:10 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 26 09:40:10 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2255361835' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 26 09:40:10 compute-0 frosty_germain[81103]:  stderr: got monmap epoch 1
Jan 26 09:40:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 26 09:40:10 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1189472396' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 26 09:40:10 compute-0 frosty_germain[81103]: --> Creating keyring file for osd.0
Jan 26 09:40:10 compute-0 frosty_germain[81103]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 26 09:40:10 compute-0 frosty_germain[81103]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 26 09:40:10 compute-0 frosty_germain[81103]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid ac85653c-ceaa-4fd5-80ce-94914596ed49 --setuser ceph --setgroup ceph
Jan 26 09:40:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:11 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 26 09:40:11 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2255361835' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 26 09:40:11 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1189472396' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 26 09:40:11 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 2 completed events
Jan 26 09:40:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:40:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:40:12 compute-0 ceph-mon[74456]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:12 compute-0 ceph-mon[74456]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 26 09:40:12 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:13 compute-0 frosty_germain[81103]:  stderr: 2026-01-26T09:40:10.549+0000 7fbac61f5740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 26 09:40:13 compute-0 frosty_germain[81103]:  stderr: 2026-01-26T09:40:10.818+0000 7fbac61f5740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 26 09:40:13 compute-0 frosty_germain[81103]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 26 09:40:13 compute-0 frosty_germain[81103]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 26 09:40:13 compute-0 frosty_germain[81103]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 26 09:40:14 compute-0 frosty_germain[81103]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:14 compute-0 frosty_germain[81103]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:14 compute-0 frosty_germain[81103]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 26 09:40:14 compute-0 frosty_germain[81103]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 26 09:40:14 compute-0 frosty_germain[81103]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 26 09:40:14 compute-0 frosty_germain[81103]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 26 09:40:14 compute-0 systemd[1]: libpod-3b881f7d672b1da48540ce7a06df892cd23161d4374dfeb3d3e91663018b88a8.scope: Deactivated successfully.
Jan 26 09:40:14 compute-0 systemd[1]: libpod-3b881f7d672b1da48540ce7a06df892cd23161d4374dfeb3d3e91663018b88a8.scope: Consumed 2.161s CPU time.
Jan 26 09:40:14 compute-0 podman[82076]: 2026-01-26 09:40:14.316128095 +0000 UTC m=+0.025949318 container died 3b881f7d672b1da48540ce7a06df892cd23161d4374dfeb3d3e91663018b88a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_germain, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:40:14 compute-0 ceph-mon[74456]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b085da4a50b28b326197a981553c0dc46836df8a8a2ab2a8485c9b80c2deca0b-merged.mount: Deactivated successfully.
Jan 26 09:40:14 compute-0 podman[82076]: 2026-01-26 09:40:14.357902851 +0000 UTC m=+0.067724064 container remove 3b881f7d672b1da48540ce7a06df892cd23161d4374dfeb3d3e91663018b88a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_germain, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 09:40:14 compute-0 systemd[1]: libpod-conmon-3b881f7d672b1da48540ce7a06df892cd23161d4374dfeb3d3e91663018b88a8.scope: Deactivated successfully.
Jan 26 09:40:14 compute-0 sudo[80982]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:14 compute-0 sudo[82091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:40:14 compute-0 sudo[82091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:14 compute-0 sudo[82091]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:14 compute-0 sudo[82116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:40:14 compute-0 sudo[82116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:14 compute-0 podman[82181]: 2026-01-26 09:40:14.875540686 +0000 UTC m=+0.045335531 container create 68d5ed4f732e09d5aae7df186ec866dbd3e635fa13f3b902c3da52a065119db7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_murdock, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 09:40:14 compute-0 systemd[1]: Started libpod-conmon-68d5ed4f732e09d5aae7df186ec866dbd3e635fa13f3b902c3da52a065119db7.scope.
Jan 26 09:40:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:14 compute-0 podman[82181]: 2026-01-26 09:40:14.943399363 +0000 UTC m=+0.113194228 container init 68d5ed4f732e09d5aae7df186ec866dbd3e635fa13f3b902c3da52a065119db7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:40:14 compute-0 podman[82181]: 2026-01-26 09:40:14.852186428 +0000 UTC m=+0.021981353 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:40:14 compute-0 podman[82181]: 2026-01-26 09:40:14.949870994 +0000 UTC m=+0.119665879 container start 68d5ed4f732e09d5aae7df186ec866dbd3e635fa13f3b902c3da52a065119db7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_murdock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:40:14 compute-0 podman[82181]: 2026-01-26 09:40:14.953314296 +0000 UTC m=+0.123109161 container attach 68d5ed4f732e09d5aae7df186ec866dbd3e635fa13f3b902c3da52a065119db7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_murdock, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 09:40:14 compute-0 loving_murdock[82197]: 167 167
Jan 26 09:40:14 compute-0 systemd[1]: libpod-68d5ed4f732e09d5aae7df186ec866dbd3e635fa13f3b902c3da52a065119db7.scope: Deactivated successfully.
Jan 26 09:40:14 compute-0 podman[82181]: 2026-01-26 09:40:14.955163194 +0000 UTC m=+0.124958099 container died 68d5ed4f732e09d5aae7df186ec866dbd3e635fa13f3b902c3da52a065119db7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_murdock, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 09:40:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d895b1dd4d3b4f5a2e2ec43540acac7c03a5e3cc87670f661e8b1bb5caa3da51-merged.mount: Deactivated successfully.
Jan 26 09:40:14 compute-0 podman[82181]: 2026-01-26 09:40:14.993595083 +0000 UTC m=+0.163389968 container remove 68d5ed4f732e09d5aae7df186ec866dbd3e635fa13f3b902c3da52a065119db7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:40:15 compute-0 systemd[1]: libpod-conmon-68d5ed4f732e09d5aae7df186ec866dbd3e635fa13f3b902c3da52a065119db7.scope: Deactivated successfully.
Jan 26 09:40:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 26 09:40:15 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 26 09:40:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:40:15 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:15 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Jan 26 09:40:15 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Jan 26 09:40:15 compute-0 podman[82221]: 2026-01-26 09:40:15.165135514 +0000 UTC m=+0.040072453 container create 4b77a688fd815873d5a8f9278489d6ed54da1f052928a0842e3c5386a324e623 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 26 09:40:15 compute-0 systemd[1]: Started libpod-conmon-4b77a688fd815873d5a8f9278489d6ed54da1f052928a0842e3c5386a324e623.scope.
Jan 26 09:40:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c2f216fe66a53fb3f3c325df91c699720efc54b7f3c317af42d56007a06faa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c2f216fe66a53fb3f3c325df91c699720efc54b7f3c317af42d56007a06faa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c2f216fe66a53fb3f3c325df91c699720efc54b7f3c317af42d56007a06faa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c2f216fe66a53fb3f3c325df91c699720efc54b7f3c317af42d56007a06faa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:15 compute-0 podman[82221]: 2026-01-26 09:40:15.228629195 +0000 UTC m=+0.103566134 container init 4b77a688fd815873d5a8f9278489d6ed54da1f052928a0842e3c5386a324e623 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_shtern, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 09:40:15 compute-0 podman[82221]: 2026-01-26 09:40:15.147529948 +0000 UTC m=+0.022466877 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:40:15 compute-0 podman[82221]: 2026-01-26 09:40:15.244616638 +0000 UTC m=+0.119553557 container start 4b77a688fd815873d5a8f9278489d6ed54da1f052928a0842e3c5386a324e623 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:40:15 compute-0 podman[82221]: 2026-01-26 09:40:15.248281656 +0000 UTC m=+0.123218565 container attach 4b77a688fd815873d5a8f9278489d6ed54da1f052928a0842e3c5386a324e623 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_shtern, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:40:15 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 26 09:40:15 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:15 compute-0 blissful_shtern[82238]: {
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:     "0": [
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:         {
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:             "devices": [
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "/dev/loop3"
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:             ],
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:             "lv_name": "ceph_lv0",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:             "lv_size": "21470642176",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:             "name": "ceph_lv0",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:             "tags": {
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "ceph.cluster_name": "ceph",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "ceph.crush_device_class": "",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "ceph.encrypted": "0",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "ceph.osd_id": "0",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "ceph.type": "block",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "ceph.vdo": "0",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:                 "ceph.with_tpm": "0"
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:             },
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:             "type": "block",
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:             "vg_name": "ceph_vg0"
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:         }
Jan 26 09:40:15 compute-0 blissful_shtern[82238]:     ]
Jan 26 09:40:15 compute-0 blissful_shtern[82238]: }
Jan 26 09:40:15 compute-0 systemd[1]: libpod-4b77a688fd815873d5a8f9278489d6ed54da1f052928a0842e3c5386a324e623.scope: Deactivated successfully.
Jan 26 09:40:15 compute-0 podman[82247]: 2026-01-26 09:40:15.608004609 +0000 UTC m=+0.030895929 container died 4b77a688fd815873d5a8f9278489d6ed54da1f052928a0842e3c5386a324e623 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 09:40:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3c2f216fe66a53fb3f3c325df91c699720efc54b7f3c317af42d56007a06faa-merged.mount: Deactivated successfully.
Jan 26 09:40:15 compute-0 podman[82247]: 2026-01-26 09:40:15.642677317 +0000 UTC m=+0.065568607 container remove 4b77a688fd815873d5a8f9278489d6ed54da1f052928a0842e3c5386a324e623 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_shtern, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 09:40:15 compute-0 systemd[1]: libpod-conmon-4b77a688fd815873d5a8f9278489d6ed54da1f052928a0842e3c5386a324e623.scope: Deactivated successfully.
Jan 26 09:40:15 compute-0 sudo[82116]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 26 09:40:15 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 26 09:40:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:40:15 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:15 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 26 09:40:15 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 26 09:40:15 compute-0 sudo[82261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:40:15 compute-0 sudo[82261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:15 compute-0 sudo[82261]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:15 compute-0 sudo[82286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:40:15 compute-0 sudo[82286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:16 compute-0 podman[82349]: 2026-01-26 09:40:16.211820037 +0000 UTC m=+0.035093800 container create 0229c38ce0e4b0426c73118f76eb4936bd20ea13a25e1e0fcfb55297d31a3bfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:40:16 compute-0 systemd[1]: Started libpod-conmon-0229c38ce0e4b0426c73118f76eb4936bd20ea13a25e1e0fcfb55297d31a3bfc.scope.
Jan 26 09:40:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:16 compute-0 podman[82349]: 2026-01-26 09:40:16.19682471 +0000 UTC m=+0.020098463 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:40:16 compute-0 podman[82349]: 2026-01-26 09:40:16.30069314 +0000 UTC m=+0.123966913 container init 0229c38ce0e4b0426c73118f76eb4936bd20ea13a25e1e0fcfb55297d31a3bfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 09:40:16 compute-0 podman[82349]: 2026-01-26 09:40:16.312733939 +0000 UTC m=+0.136007732 container start 0229c38ce0e4b0426c73118f76eb4936bd20ea13a25e1e0fcfb55297d31a3bfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Jan 26 09:40:16 compute-0 podman[82349]: 2026-01-26 09:40:16.316494098 +0000 UTC m=+0.139767851 container attach 0229c38ce0e4b0426c73118f76eb4936bd20ea13a25e1e0fcfb55297d31a3bfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_mclean, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 09:40:16 compute-0 focused_mclean[82365]: 167 167
Jan 26 09:40:16 compute-0 systemd[1]: libpod-0229c38ce0e4b0426c73118f76eb4936bd20ea13a25e1e0fcfb55297d31a3bfc.scope: Deactivated successfully.
Jan 26 09:40:16 compute-0 ceph-mon[74456]: Deploying daemon osd.1 on compute-1
Jan 26 09:40:16 compute-0 ceph-mon[74456]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:16 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 26 09:40:16 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:16 compute-0 ceph-mon[74456]: Deploying daemon osd.0 on compute-0
Jan 26 09:40:16 compute-0 podman[82370]: 2026-01-26 09:40:16.359725472 +0000 UTC m=+0.022471075 container died 0229c38ce0e4b0426c73118f76eb4936bd20ea13a25e1e0fcfb55297d31a3bfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_mclean, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 09:40:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-872b6a8d42355c4f0a1346fbe9206ca668f4cea9086a52bc2fd2fdb2360fb8e4-merged.mount: Deactivated successfully.
Jan 26 09:40:16 compute-0 podman[82370]: 2026-01-26 09:40:16.390790695 +0000 UTC m=+0.053536308 container remove 0229c38ce0e4b0426c73118f76eb4936bd20ea13a25e1e0fcfb55297d31a3bfc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_mclean, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 09:40:16 compute-0 systemd[1]: libpod-conmon-0229c38ce0e4b0426c73118f76eb4936bd20ea13a25e1e0fcfb55297d31a3bfc.scope: Deactivated successfully.
Jan 26 09:40:16 compute-0 podman[82396]: 2026-01-26 09:40:16.642399567 +0000 UTC m=+0.035601223 container create d30b1c524e6d5d149c33bf1ac803b9c2d0c017c9a3434d50d2866022d3d12389 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:40:16 compute-0 systemd[1]: Started libpod-conmon-d30b1c524e6d5d149c33bf1ac803b9c2d0c017c9a3434d50d2866022d3d12389.scope.
Jan 26 09:40:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce177e8ae191be897b2bd8d688c1390228d464774ba753d7127f568f988f922/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce177e8ae191be897b2bd8d688c1390228d464774ba753d7127f568f988f922/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce177e8ae191be897b2bd8d688c1390228d464774ba753d7127f568f988f922/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce177e8ae191be897b2bd8d688c1390228d464774ba753d7127f568f988f922/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce177e8ae191be897b2bd8d688c1390228d464774ba753d7127f568f988f922/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:16 compute-0 podman[82396]: 2026-01-26 09:40:16.704459541 +0000 UTC m=+0.097661207 container init d30b1c524e6d5d149c33bf1ac803b9c2d0c017c9a3434d50d2866022d3d12389 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate-test, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:40:16 compute-0 podman[82396]: 2026-01-26 09:40:16.712134403 +0000 UTC m=+0.105336069 container start d30b1c524e6d5d149c33bf1ac803b9c2d0c017c9a3434d50d2866022d3d12389 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate-test, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 26 09:40:16 compute-0 podman[82396]: 2026-01-26 09:40:16.714735162 +0000 UTC m=+0.107936858 container attach d30b1c524e6d5d149c33bf1ac803b9c2d0c017c9a3434d50d2866022d3d12389 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 26 09:40:16 compute-0 podman[82396]: 2026-01-26 09:40:16.628116609 +0000 UTC m=+0.021318295 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:40:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:40:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:40:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:40:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:40:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:40:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:40:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate-test[82412]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 26 09:40:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate-test[82412]:                             [--no-systemd] [--no-tmpfs]
Jan 26 09:40:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate-test[82412]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 26 09:40:16 compute-0 systemd[1]: libpod-d30b1c524e6d5d149c33bf1ac803b9c2d0c017c9a3434d50d2866022d3d12389.scope: Deactivated successfully.
Jan 26 09:40:16 compute-0 podman[82396]: 2026-01-26 09:40:16.876583257 +0000 UTC m=+0.269784923 container died d30b1c524e6d5d149c33bf1ac803b9c2d0c017c9a3434d50d2866022d3d12389 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:40:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ce177e8ae191be897b2bd8d688c1390228d464774ba753d7127f568f988f922-merged.mount: Deactivated successfully.
Jan 26 09:40:16 compute-0 podman[82396]: 2026-01-26 09:40:16.911335737 +0000 UTC m=+0.304537403 container remove d30b1c524e6d5d149c33bf1ac803b9c2d0c017c9a3434d50d2866022d3d12389 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate-test, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 09:40:16 compute-0 systemd[1]: libpod-conmon-d30b1c524e6d5d149c33bf1ac803b9c2d0c017c9a3434d50d2866022d3d12389.scope: Deactivated successfully.
Jan 26 09:40:17 compute-0 systemd[1]: Reloading.
Jan 26 09:40:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:40:17 compute-0 systemd-sysv-generator[82476]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:40:17 compute-0 systemd-rc-local-generator[82472]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:40:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:17 compute-0 systemd[1]: Reloading.
Jan 26 09:40:17 compute-0 systemd-sysv-generator[82516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:40:17 compute-0 systemd-rc-local-generator[82512]: /etc/rc.d/rc.local is not marked executable, skipping.
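The two systemd "Reloading." entries are daemon-reloads triggered while cephadm installs and enables the unit for osd.0; the SysV-generator warning about /etc/rc.d/init.d/network and the rc.local notice fire on every reload and are unrelated to Ceph. A sketch for inspecting the unit just written, assuming cephadm's usual ceph-<fsid>@<daemon> unit naming with the fsid seen throughout this log:

    $ systemctl cat ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@osd.0.service
    $ systemctl daemon-reload    # reproduces a "Reloading." journal entry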
Jan 26 09:40:17 compute-0 systemd[1]: Starting Ceph osd.0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:40:17 compute-0 podman[82575]: 2026-01-26 09:40:17.842383039 +0000 UTC m=+0.023491384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:40:18 compute-0 podman[82575]: 2026-01-26 09:40:18.175478788 +0000 UTC m=+0.356587093 container create 9b4312e14698133f5171e19e2d56f0f71c750f306bd222beb6b95f0a62e24645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:40:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44242985cb5328256f68ee08a8f58539ae3bc9e47966154c9b94c04c28605ea3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44242985cb5328256f68ee08a8f58539ae3bc9e47966154c9b94c04c28605ea3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44242985cb5328256f68ee08a8f58539ae3bc9e47966154c9b94c04c28605ea3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44242985cb5328256f68ee08a8f58539ae3bc9e47966154c9b94c04c28605ea3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44242985cb5328256f68ee08a8f58539ae3bc9e47966154c9b94c04c28605ea3/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:18 compute-0 podman[82575]: 2026-01-26 09:40:18.249052316 +0000 UTC m=+0.430160681 container init 9b4312e14698133f5171e19e2d56f0f71c750f306bd222beb6b95f0a62e24645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:40:18 compute-0 podman[82575]: 2026-01-26 09:40:18.256879263 +0000 UTC m=+0.437987568 container start 9b4312e14698133f5171e19e2d56f0f71c750f306bd222beb6b95f0a62e24645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:40:18 compute-0 podman[82575]: 2026-01-26 09:40:18.261456895 +0000 UTC m=+0.442565160 container attach 9b4312e14698133f5171e19e2d56f0f71c750f306bd222beb6b95f0a62e24645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 09:40:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate[82590]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 26 09:40:18 compute-0 bash[82575]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 26 09:40:18 compute-0 ceph-mon[74456]: pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate[82590]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 26 09:40:18 compute-0 bash[82575]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 26 09:40:18 compute-0 lvm[82671]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:40:18 compute-0 lvm[82671]: VG ceph_vg0 finished
Jan 26 09:40:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate[82590]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 26 09:40:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate[82590]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 26 09:40:18 compute-0 bash[82575]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 26 09:40:18 compute-0 bash[82575]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 26 09:40:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate[82590]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 26 09:40:18 compute-0 bash[82575]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 26 09:40:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate[82590]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 26 09:40:19 compute-0 bash[82575]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 26 09:40:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate[82590]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 26 09:40:19 compute-0 bash[82575]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 26 09:40:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:40:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:40:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate[82590]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:19 compute-0 bash[82575]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate[82590]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:19 compute-0 bash[82575]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate[82590]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 26 09:40:19 compute-0 bash[82575]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 26 09:40:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate[82590]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 26 09:40:19 compute-0 bash[82575]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 26 09:40:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate[82590]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 26 09:40:19 compute-0 bash[82575]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 26 09:40:19 compute-0 systemd[1]: libpod-9b4312e14698133f5171e19e2d56f0f71c750f306bd222beb6b95f0a62e24645.scope: Deactivated successfully.
Jan 26 09:40:19 compute-0 systemd[1]: libpod-9b4312e14698133f5171e19e2d56f0f71c750f306bd222beb6b95f0a62e24645.scope: Consumed 1.294s CPU time.
Jan 26 09:40:19 compute-0 podman[82575]: 2026-01-26 09:40:19.451821111 +0000 UTC m=+1.632929376 container died 9b4312e14698133f5171e19e2d56f0f71c750f306bd222beb6b95f0a62e24645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:40:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-44242985cb5328256f68ee08a8f58539ae3bc9e47966154c9b94c04c28605ea3-merged.mount: Deactivated successfully.
Jan 26 09:40:19 compute-0 podman[82575]: 2026-01-26 09:40:19.507753922 +0000 UTC m=+1.688862187 container remove 9b4312e14698133f5171e19e2d56f0f71c750f306bd222beb6b95f0a62e24645 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
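The activate container above shows a normal two-stage activation: the raw activator finds no matching OSD (the data device is an LVM logical volume, so "Failed to activate via raw" is harmless), after which the lvm activator primes /var/lib/ceph/osd/ceph-0 from the BlueStore label and wires up the block symlink. Condensed to the commands actually logged, the by-hand equivalent inside the same image is roughly the following sketch (the OSD's fsid is not shown in the log and is left as a placeholder):

    $ ceph-volume lvm activate --no-systemd 0 <osd-fsid>
    # which performs, as logged above:
    #   ceph-bluestore-tool --cluster=ceph prime-osd-dir \
    #       --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
    #   ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
    #   chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block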
Jan 26 09:40:19 compute-0 podman[82821]: 2026-01-26 09:40:19.801926171 +0000 UTC m=+0.093955229 container create cb8bebf3475b3bfb643d0a307f1584c5f3c09421b51401c040cb5d7a8db9c240 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 26 09:40:19 compute-0 podman[82821]: 2026-01-26 09:40:19.760848483 +0000 UTC m=+0.052877621 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d1f99d191663a457a671230285638c484ec5b63c8f91a734fc4bcf3d5dff8ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d1f99d191663a457a671230285638c484ec5b63c8f91a734fc4bcf3d5dff8ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d1f99d191663a457a671230285638c484ec5b63c8f91a734fc4bcf3d5dff8ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d1f99d191663a457a671230285638c484ec5b63c8f91a734fc4bcf3d5dff8ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d1f99d191663a457a671230285638c484ec5b63c8f91a734fc4bcf3d5dff8ef/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:19 compute-0 podman[82821]: 2026-01-26 09:40:19.881406776 +0000 UTC m=+0.173435854 container init cb8bebf3475b3bfb643d0a307f1584c5f3c09421b51401c040cb5d7a8db9c240 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:40:19 compute-0 podman[82821]: 2026-01-26 09:40:19.896177906 +0000 UTC m=+0.188206984 container start cb8bebf3475b3bfb643d0a307f1584c5f3c09421b51401c040cb5d7a8db9c240 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 09:40:19 compute-0 bash[82821]: cb8bebf3475b3bfb643d0a307f1584c5f3c09421b51401c040cb5d7a8db9c240
Jan 26 09:40:19 compute-0 systemd[1]: Started Ceph osd.0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:40:19 compute-0 sudo[82286]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:40:19 compute-0 ceph-osd[82841]: set uid:gid to 167:167 (ceph:ceph)
Jan 26 09:40:19 compute-0 ceph-osd[82841]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Jan 26 09:40:19 compute-0 ceph-osd[82841]: pidfile_write: ignore empty --pid-file
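The daemon banner reports ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid, which matches the CEPH_SHA1 and CEPH_REF=squid labels carried on the container image in the podman events above. A quick consistency check (sketch; grepping the inspect JSON avoids depending on its exact layout):

    $ podman image inspect \
          quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
      | grep -E '"CEPH_(SHA1|REF)"'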
Jan 26 09:40:19 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:19 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:19 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:19 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:19 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) close
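The open/ioctl/close cycles here and below are the OSD repeatedly re-reading its BlueStore label and sizing the device during startup; none of the three recurring messages is an error. F_SET_FILE_RW_HINT returning EINVAL just means the device-mapper LV does not accept write-lifetime hints, and the st_blksize 512 notice records that BlueStore keeps its own 4096-byte block size regardless. The label being probed can be dumped by hand (sketch; run where the OSD's data path is mounted):

    $ ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block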
Jan 26 09:40:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:40:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:20 compute-0 sudo[82853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:40:20 compute-0 sudo[82853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:20 compute-0 sudo[82853]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:20 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:20 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:20 compute-0 ceph-mon[74456]: pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:20 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:20 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:20 compute-0 sudo[82878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:40:20 compute-0 sudo[82878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) close
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) close
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) close
Jan 26 09:40:20 compute-0 podman[82949]: 2026-01-26 09:40:20.535684389 +0000 UTC m=+0.045286991 container create c782e6d4df560ca1dfebc19df49f9d1703233c3765c51e66cc9c10a1d3e97a3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_driscoll, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:40:20 compute-0 systemd[1]: Started libpod-conmon-c782e6d4df560ca1dfebc19df49f9d1703233c3765c51e66cc9c10a1d3e97a3f.scope.
Jan 26 09:40:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:20 compute-0 podman[82949]: 2026-01-26 09:40:20.517090567 +0000 UTC m=+0.026693199 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:40:20 compute-0 podman[82949]: 2026-01-26 09:40:20.628931967 +0000 UTC m=+0.138534599 container init c782e6d4df560ca1dfebc19df49f9d1703233c3765c51e66cc9c10a1d3e97a3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_driscoll, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:40:20 compute-0 podman[82949]: 2026-01-26 09:40:20.637549846 +0000 UTC m=+0.147152458 container start c782e6d4df560ca1dfebc19df49f9d1703233c3765c51e66cc9c10a1d3e97a3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:40:20 compute-0 podman[82949]: 2026-01-26 09:40:20.641267524 +0000 UTC m=+0.150870136 container attach c782e6d4df560ca1dfebc19df49f9d1703233c3765c51e66cc9c10a1d3e97a3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_driscoll, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:40:20 compute-0 gracious_driscoll[82968]: 167 167
Jan 26 09:40:20 compute-0 systemd[1]: libpod-c782e6d4df560ca1dfebc19df49f9d1703233c3765c51e66cc9c10a1d3e97a3f.scope: Deactivated successfully.
Jan 26 09:40:20 compute-0 podman[82949]: 2026-01-26 09:40:20.646121743 +0000 UTC m=+0.155724355 container died c782e6d4df560ca1dfebc19df49f9d1703233c3765c51e66cc9c10a1d3e97a3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 09:40:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-10021e85babcc7183b27b0fb124ecd2eb39d6ff16f798969ecab31b9aa1a1980-merged.mount: Deactivated successfully.
Jan 26 09:40:20 compute-0 podman[82949]: 2026-01-26 09:40:20.685389922 +0000 UTC m=+0.194992534 container remove c782e6d4df560ca1dfebc19df49f9d1703233c3765c51e66cc9c10a1d3e97a3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 09:40:20 compute-0 systemd[1]: libpod-conmon-c782e6d4df560ca1dfebc19df49f9d1703233c3765c51e66cc9c10a1d3e97a3f.scope: Deactivated successfully.
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) close
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 26 09:40:20 compute-0 ceph-osd[82841]: bdev(0x55c5bbd31800 /var/lib/ceph/osd/ceph-0/block) close
Jan 26 09:40:20 compute-0 podman[82992]: 2026-01-26 09:40:20.872458875 +0000 UTC m=+0.063434410 container create 11a41c2445e01845caf2bd7b151f524eb942905021f8691e5e2390d47386e8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:40:20 compute-0 systemd[1]: Started libpod-conmon-11a41c2445e01845caf2bd7b151f524eb942905021f8691e5e2390d47386e8c4.scope.
Jan 26 09:40:20 compute-0 podman[82992]: 2026-01-26 09:40:20.850405202 +0000 UTC m=+0.041380777 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:40:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0afb2f99dbdf001cc9803a50599342e9ae3e29b0dd19cf21648a4fb356cccf4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0afb2f99dbdf001cc9803a50599342e9ae3e29b0dd19cf21648a4fb356cccf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0afb2f99dbdf001cc9803a50599342e9ae3e29b0dd19cf21648a4fb356cccf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0afb2f99dbdf001cc9803a50599342e9ae3e29b0dd19cf21648a4fb356cccf4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:20 compute-0 podman[82992]: 2026-01-26 09:40:20.993958282 +0000 UTC m=+0.184933927 container init 11a41c2445e01845caf2bd7b151f524eb942905021f8691e5e2390d47386e8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Jan 26 09:40:21 compute-0 podman[82992]: 2026-01-26 09:40:21.007271555 +0000 UTC m=+0.198247100 container start 11a41c2445e01845caf2bd7b151f524eb942905021f8691e5e2390d47386e8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_margulis, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 09:40:21 compute-0 podman[82992]: 2026-01-26 09:40:21.011344643 +0000 UTC m=+0.202320208 container attach 11a41c2445e01845caf2bd7b151f524eb942905021f8691e5e2390d47386e8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:40:21 compute-0 ceph-osd[82841]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 26 09:40:21 compute-0 ceph-osd[82841]: load: jerasure load: lrc 
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 26 09:40:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:40:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:40:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 26 09:40:21 compute-0 lvm[83097]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:40:21 compute-0 lvm[83097]: VG ceph_vg0 finished
Jan 26 09:40:21 compute-0 ceph-osd[82841]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 26 09:40:21 compute-0 ceph-osd[82841]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
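The mClockScheduler line records how the OSD budgets IO: 157286400.00 bytes/second per shard is 150 MiB/s, and 499321.90 bytes/io is that bandwidth divided by an IOPS capacity of 315, consistent with the stock HDD defaults (osd_mclock_max_sequential_bandwidth_hdd, osd_mclock_max_capacity_iops_hdd) and with the "rotational device" detection above; cutoff=196 is the internal op-priority cutoff corresponding to osd_op_queue_cut_off=high. A sketch for confirming the effective scheduler and profile:

    $ ceph config show osd.0 osd_op_queue
    $ ceph config show osd.0 osd_mclock_profile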
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 26 09:40:21 compute-0 sharp_margulis[83015]: {}
Jan 26 09:40:21 compute-0 systemd[1]: libpod-11a41c2445e01845caf2bd7b151f524eb942905021f8691e5e2390d47386e8c4.scope: Deactivated successfully.
Jan 26 09:40:21 compute-0 systemd[1]: libpod-11a41c2445e01845caf2bd7b151f524eb942905021f8691e5e2390d47386e8c4.scope: Consumed 1.200s CPU time.
Jan 26 09:40:21 compute-0 podman[82992]: 2026-01-26 09:40:21.785590593 +0000 UTC m=+0.976566168 container died 11a41c2445e01845caf2bd7b151f524eb942905021f8691e5e2390d47386e8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_margulis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0afb2f99dbdf001cc9803a50599342e9ae3e29b0dd19cf21648a4fb356cccf4-merged.mount: Deactivated successfully.
Jan 26 09:40:21 compute-0 podman[82992]: 2026-01-26 09:40:21.829613248 +0000 UTC m=+1.020588783 container remove 11a41c2445e01845caf2bd7b151f524eb942905021f8691e5e2390d47386e8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 09:40:21 compute-0 systemd[1]: libpod-conmon-11a41c2445e01845caf2bd7b151f524eb942905021f8691e5e2390d47386e8c4.scope: Deactivated successfully.
Jan 26 09:40:21 compute-0 sudo[82878]: pam_unix(sudo:session): session closed for user root
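The short-lived gracious_driscoll and sharp_margulis containers are cephadm inventory probes: the "167 167" output looks like cephadm confirming the ceph uid:gid inside the image (matching the "set uid:gid to 167:167" banner above), and the sudo invocation above shows sharp_margulis was `ceph-volume raw list --format json`, which printed `{}` because osd.0 lives on an LVM logical volume that the raw lister deliberately ignores. The LVM-side view would report it (sketch, run as root on the host):

    $ cephadm ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
    {}
    $ cephadm ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json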
Jan 26 09:40:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:40:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:40:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:21 compute-0 sudo[83126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:40:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 26 09:40:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/3944242284,v1:192.168.122.101:6801/3944242284]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 26 09:40:21 compute-0 sudo[83126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:21 compute-0 sudo[83126]: pam_unix(sudo:session): session closed for user root
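The audit entry above shows osd.1 on compute-1 classifying itself at first boot: a new OSD sets its own CRUSH device class (hdd here, matching the rotational detection) when it registers. The same operation is available from the CLI:

    $ ceph osd crush set-device-class hdd osd.1
    # changing an already-set class requires clearing it first:
    $ ceph osd crush rm-device-class osd.1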
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:21 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd6c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd7000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd7000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd7000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd7000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount shared_bdev_used = 0
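These bluefs lines mark the OSD mounting BlueFS, the small internal filesystem that hosts RocksDB inside the block device. "add_block_device bdev 1 ... size 20 GiB" together with "shared_bdev_used = 0" indicates a single-device OSD (data, DB and WAL all share the 20 GiB LV), and the all-zero "locked allocations" are what a freshly created OSD looks like. BlueFS space usage per device can be queried offline (sketch; stop the OSD before running it):

    $ ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-0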
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: RocksDB version: 7.9.2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Git sha 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: DB SUMMARY
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: DB Session ID:  31IN549TI4KU578E1WXC
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: CURRENT file:  CURRENT
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: IDENTITY file:  IDENTITY
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                         Options.error_if_exists: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.create_if_missing: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                         Options.paranoid_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                                     Options.env: 0x55c5bcba7dc0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                                Options.info_log: 0x55c5bcbab7a0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_file_opening_threads: 16
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                              Options.statistics: (nil)
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.use_fsync: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.max_log_file_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                         Options.allow_fallocate: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.use_direct_reads: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.create_missing_column_families: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                              Options.db_log_dir: 
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                                 Options.wal_dir: db.wal
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.advise_random_on_open: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.write_buffer_manager: 0x55c5bcca2a00
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                            Options.rate_limiter: (nil)
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.unordered_write: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.row_cache: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                              Options.wal_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.allow_ingest_behind: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.two_write_queues: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.manual_wal_flush: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.wal_compression: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.atomic_flush: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.log_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.allow_data_in_errors: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.db_host_id: __hostname__
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.max_background_jobs: 4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.max_background_compactions: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.max_subcompactions: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.max_open_files: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.bytes_per_sync: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.max_background_flushes: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Compression algorithms supported:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kZSTD supported: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kXpressCompression supported: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kBZip2Compression supported: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kLZ4Compression supported: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kZlibCompression supported: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kLZ4HCCompression supported: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kSnappyCompression supported: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbabb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbabb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbabb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbabb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbabb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbabb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbabb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbabb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbabb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbabb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 68f64268-147c-431c-8a30-46372a2f535f
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420422039571, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420422039775, "job": 1, "event": "recovery_finished"}
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: freelist init
Jan 26 09:40:22 compute-0 ceph-osd[82841]: freelist _read_cfg
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs umount
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd7000 /var/lib/ceph/osd/ceph-0/block) close
Jan 26 09:40:22 compute-0 sudo[83340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:40:22 compute-0 sudo[83340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:22 compute-0 sudo[83340]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:40:22 compute-0 sudo[83365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 26 09:40:22 compute-0 sudo[83365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd7000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd7000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd7000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bdev(0x55c5bcbd7000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluefs mount shared_bdev_used = 4718592
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: RocksDB version: 7.9.2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Git sha 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: DB SUMMARY
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: DB Session ID:  31IN549TI4KU578E1WXD
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: CURRENT file:  CURRENT
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: IDENTITY file:  IDENTITY
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                         Options.error_if_exists: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.create_if_missing: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                         Options.paranoid_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                                     Options.env: 0x55c5bcd462a0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                                Options.info_log: 0x55c5bcbabb20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_file_opening_threads: 16
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                              Options.statistics: (nil)
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.use_fsync: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.max_log_file_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                         Options.allow_fallocate: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.use_direct_reads: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.create_missing_column_families: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                              Options.db_log_dir: 
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                                 Options.wal_dir: db.wal
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.advise_random_on_open: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.write_buffer_manager: 0x55c5bcca2a00
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                            Options.rate_limiter: (nil)
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.unordered_write: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.row_cache: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                              Options.wal_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.allow_ingest_behind: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.two_write_queues: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.manual_wal_flush: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.wal_compression: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.atomic_flush: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.log_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.allow_data_in_errors: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.db_host_id: __hostname__
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.max_background_jobs: 4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.max_background_compactions: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.max_subcompactions: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.max_open_files: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.bytes_per_sync: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.max_background_flushes: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Compression algorithms supported:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kZSTD supported: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kXpressCompression supported: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kBZip2Compression supported: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kLZ4Compression supported: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kZlibCompression supported: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kLZ4HCCompression supported: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         kSnappyCompression supported: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
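The dump above (and the near-identical ones that follow for O-0, O-1, and O-2) is RocksDB printing each column family's effective options at open time. As a minimal C++ sketch, assuming the stock RocksDB API rather than Ceph's actual wiring, the memtable and level-sizing values in the dump map onto rocksdb::Options like this (the function name MakeOsdLikeOptions is illustrative):

    #include <rocksdb/options.h>

    // Sketch only: mirrors the per-column-family values logged above;
    // every other field stays at RocksDB defaults, which need not match
    // what BlueStore actually sets.
    rocksdb::Options MakeOsdLikeOptions() {
      rocksdb::Options opts;
      opts.write_buffer_size = 16 * 1024 * 1024;        // 16777216
      opts.max_write_buffer_number = 64;
      opts.min_write_buffer_number_to_merge = 6;        // flush 6 memtables at once
      opts.compression = rocksdb::kLZ4Compression;
      opts.compaction_style = rocksdb::kCompactionStyleLevel;
      opts.compaction_pri = rocksdb::kMinOverlappingRatio;
      opts.num_levels = 7;
      opts.level0_file_num_compaction_trigger = 8;
      opts.target_file_size_base = 64 * 1024 * 1024;    // 67108864
      opts.max_bytes_for_level_base = 1ULL << 30;       // 1073741824
      opts.max_bytes_for_level_multiplier = 8.0;
      opts.force_consistency_checks = true;
      return opts;
    }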
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbabac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-mon[74456]: pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:22 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia'
Jan 26 09:40:22 compute-0 ceph-mon[74456]: from='osd.1 [v2:192.168.122.101:6800/3944242284,v1:192.168.122.101:6801/3944242284]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
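Every dump also lists CompactOnDeletionCollector with a 32768-entry sliding window and a 16384-deletion trigger. In stock RocksDB that collector is installed through a factory; a minimal sketch, assuming the upstream C++ API rather than Ceph's code:

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    // Sketch only: marks an SST file for compaction once a sliding window
    // of 32768 entries contains at least 16384 tombstones, so delete-heavy
    // files get compacted away early (matches the values logged above).
    rocksdb::Options opts;
    opts.table_properties_collector_factories.emplace_back(
        rocksdb::NewCompactOnDeletionCollectorFactory(
            /*sliding_window_size=*/32768,
            /*deletion_trigger=*/16384));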
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbabac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
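The same write-stall thresholds repeat in each dump: writes are first throttled at 20 L0 files or 64 GiB of pending compaction debt, then stopped entirely at 36 L0 files or 256 GiB. A sketch of the corresponding fields, again assuming stock rocksdb::Options:

    // Sketch only: the back-pressure knobs repeated in every dump above.
    rocksdb::Options opts;
    opts.level0_slowdown_writes_trigger = 20;                 // throttle writes
    opts.level0_stop_writes_trigger = 36;                     // halt writes
    opts.soft_pending_compaction_bytes_limit = 64ULL << 30;   // 68719476736
    opts.hard_pending_compaction_bytes_limit = 256ULL << 30;  // 274877906944
    opts.max_compaction_bytes = 1677721600;                   // per-compaction input cap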
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:           Options.merge_operator: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5bcbabac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c5bbdc69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.compression: LZ4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.num_levels: 7
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.bloom_locality: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                               Options.ttl: 2592000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                       Options.enable_blob_files: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                           Options.min_blob_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
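Note the block_cache pointers in the dumps: p-2 uses 0x55c5bbdc7350 (capacity 483183820, roughly 461 MiB), while O-0, O-1, and O-2 all share 0x55c5bbdc69b0 (536870912, 512 MiB), i.e. one cache instance serves several column families. The log names Ceph's BinnedLRUCache; in this sketch stock rocksdb::NewLRUCache stands in for it, and the 10 bits-per-key bloom setting is an assumption (the dump only names the policy):

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/table.h>

    // Sketch only: one shared cache plus the table options visible above.
    std::shared_ptr<rocksdb::Cache> cache =
        rocksdb::NewLRUCache(512 * 1024 * 1024, /*num_shard_bits=*/4);

    rocksdb::BlockBasedTableOptions table_opts;
    table_opts.block_cache = cache;                    // reused across CFs
    table_opts.block_size = 4096;
    table_opts.cache_index_and_filter_blocks = true;
    table_opts.pin_top_level_index_and_filter = true;
    table_opts.format_version = 5;
    table_opts.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));

    rocksdb::Options opts;
    opts.table_factory.reset(
        rocksdb::NewBlockBasedTableFactory(table_opts));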
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
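The manifest recovery enumerates twelve column families (default, m-0..2, p-0..2, O-0..2, L, P), which BlueStore uses to shard its metadata by key prefix. When a RocksDB database has named column families, every one listed in the manifest must be passed to DB::Open; a minimal sketch, assuming the stock C++ API, with the path "db" purely illustrative:

    #include <rocksdb/db.h>
    #include <string>
    #include <vector>

    // Sketch only: open a DB whose manifest lists the column families above.
    std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
    for (const char* name : {"default", "m-0", "m-1", "m-2", "p-0", "p-1",
                             "p-2", "O-0", "O-1", "O-2", "L", "P"}) {
      cfs.emplace_back(name, rocksdb::ColumnFamilyOptions());
    }
    std::vector<rocksdb::ColumnFamilyHandle*> handles;
    rocksdb::DB* db = nullptr;
    rocksdb::Status s =
        rocksdb::DB::Open(rocksdb::DBOptions(), "db", cfs, &handles, &db);

If the names are not known in advance, rocksdb::DB::ListColumnFamilies can enumerate them from the database first.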
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 68f64268-147c-431c-8a30-46372a2f535f
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420422315878, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420422319782, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420422, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68f64268-147c-431c-8a30-46372a2f535f", "db_session_id": "31IN549TI4KU578E1WXD", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420422322208, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420422, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68f64268-147c-431c-8a30-46372a2f535f", "db_session_id": "31IN549TI4KU578E1WXD", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420422324436, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420422, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68f64268-147c-431c-8a30-46372a2f535f", "db_session_id": "31IN549TI4KU578E1WXD", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420422325841, "job": 1, "event": "recovery_finished"}
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c5bcda8000
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: DB pointer 0x55c5bcd52000
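Each EVENT_LOG_v1 line carries a machine-readable JSON payload after a fixed prefix, so the recovery sequence above (recovery_started, one table_file_creation per column family flushed from the replayed WAL, recovery_finished) can be post-processed directly. A sketch under the same osd-journal.log assumption as before:

    import json
    import re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def events(path):
        """Yield decoded EVENT_LOG_v1 payloads from a journal dump."""
        with open(path) as fh:
            for line in fh:
                m = EVENT_RE.search(line)
                if m:
                    yield json.loads(m.group(1))

    for ev in events("osd-journal.log"):
        if ev.get("event") == "table_file_creation":
            props = ev["table_properties"]
            print(ev["cf_name"], ev["file_number"],
                  f'{ev["file_size"]} B', f'{props["num_entries"]} entries')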
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 26 09:40:22 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
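The _open_db line records the effective RocksDB option string BlueStore used (controlled in Ceph by the bluestore_rocksdb_options configuration option). A small sketch that splits the string into key/value pairs for easier diffing against another OSD; the string below is copied from the line above:

    # Option string as logged by _open_db above.
    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    for key, value in sorted(parsed.items()):
        print(f"{key:40s} {value}")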
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 09:40:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
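The stats dump repeats the same per-column-family template; nearly every counter is zero because the database has been open for only ~0.1 s, and only default, p-0 and O-2 report L0 files, matching the SSTs just created during WAL replay. A sketch that condenses the dump to one line per column family by pairing each "Compaction Stats [cf]" header with its Sum row (same osd-journal.log assumption as above):

    import re

    HEADER_RE = re.compile(r"\*\* Compaction Stats \[(.+?)\] \*\*")
    SUM_RE = re.compile(r"^\s*Sum\s+(\d+)/(\d+)\s+([\d.]+ [KMG]?B)")

    def sum_rows(path):
        """Yield (cf_name, file_count, total_size) per stats block."""
        cf = None
        with open(path) as fh:
            for line in fh:
                h = HEADER_RE.search(line)
                if h:
                    cf = h.group(1)
                    continue
                s = SUM_RE.match(line)
                if s and cf:
                    yield cf, int(s.group(1)), s.group(3)
                    cf = None  # only the Level table has a Sum row

    for cf, files, size in sum_rows("osd-journal.log"):
        print(f"{cf:8s} files={files} size={size}")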
Jan 26 09:40:22 compute-0 ceph-osd[82841]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 26 09:40:22 compute-0 ceph-osd[82841]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 26 09:40:22 compute-0 ceph-osd[82841]: _get_class not permitted to load lua
Jan 26 09:40:22 compute-0 ceph-osd[82841]: _get_class not permitted to load sdk
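lua and sdk are rejected because the OSD only loads object classes named in its allow-list, the osd_class_load_list option; cephfs and hello loaded above because the default list includes them but not lua or sdk. An illustrative check, assuming a working ceph CLI and admin keyring on the host:

    import subprocess

    # Show the class allow-list that produced the "_get_class not permitted"
    # messages above.
    out = subprocess.run(
        ["ceph", "config", "get", "osd", "osd_class_load_list"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())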
Jan 26 09:40:22 compute-0 ceph-osd[82841]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 26 09:40:22 compute-0 ceph-osd[82841]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 26 09:40:22 compute-0 ceph-osd[82841]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 26 09:40:22 compute-0 ceph-osd[82841]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 26 09:40:22 compute-0 ceph-osd[82841]: osd.0 0 load_pgs
Jan 26 09:40:22 compute-0 ceph-osd[82841]: osd.0 0 load_pgs opened 0 pgs
Jan 26 09:40:22 compute-0 ceph-osd[82841]: osd.0 0 log_to_monitors true
Jan 26 09:40:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0[82837]: 2026-01-26T09:40:22.370+0000 7f964906b740 -1 osd.0 0 log_to_monitors true
Jan 26 09:40:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 26 09:40:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3688054713,v1:192.168.122.100:6803/3688054713]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
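The mon_command payload is the JSON form of a plain CLI call, so the same device-class assignment could be issued by hand. Illustrative only, since the OSD performs this automatically at startup:

    import subprocess

    # Equivalent of the dispatched mon_command above:
    # {"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}
    subprocess.run(
        ["ceph", "osd", "crush", "set-device-class", "hdd", "0"],
        check=True,
    )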
Jan 26 09:40:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:40:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:22 compute-0 podman[83675]: 2026-01-26 09:40:22.852453869 +0000 UTC m=+0.084264552 container exec 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 26 09:40:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 26 09:40:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 09:40:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:40:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:40:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/3944242284,v1:192.168.122.101:6801/3944242284]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 26 09:40:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3688054713,v1:192.168.122.100:6803/3688054713]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 26 09:40:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 26 09:40:23 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
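"2 total, 0 up, 2 in" distinguishes the two OSD states: "in" means the OSD counts for data placement, "up" means it is running and reachable; at this point both OSDs are registered in the map but have not yet booted into it. A sketch that re-checks the same summary (the field names shown match recent Ceph releases; older ones nest them under an "osdmap" key):

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "osd", "stat", "-f", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    stat = json.loads(raw)
    stat = stat.get("osdmap", stat)  # tolerate both JSON layouts
    print(f'{stat["num_osds"]} total, '
          f'{stat["num_up_osds"]} up, {stat["num_in_osds"]} in')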
Jan 26 09:40:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 26 09:40:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3688054713,v1:192.168.122.100:6803/3688054713]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 26 09:40:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Jan 26 09:40:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 26 09:40:23 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Jan 26 09:40:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/3944242284,v1:192.168.122.101:6801/3944242284]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 26 09:40:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-1,root=default}
Jan 26 09:40:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 26 09:40:23 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
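The initial_weight of 0.0195 in the create-or-move commands is the device capacity expressed in TiB, the convention CRUSH uses for default weights; it corresponds to a ~20 GiB test volume. The arithmetic:

    # CRUSH initial weight = capacity in TiB (2**40 bytes), as used by
    # the create-or-move commands above.
    def crush_weight(size_bytes: int) -> float:
        return round(size_bytes / 2**40, 4)

    print(crush_weight(20 * 2**30))  # 20 GiB device -> 0.0195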
Jan 26 09:40:23 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 09:40:23 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
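The ENOENT from the mgr is transient: it queried OSD metadata while osd.0 and osd.1 were still booting and had not yet pushed their metadata to the monitors. Once the OSDs are up, the same query succeeds; an illustrative re-check:

    import json
    import subprocess

    # Re-issue the query that failed above; after boot this returns the
    # OSD's metadata (hostname, device class, objectstore type, ...).
    meta = json.loads(subprocess.run(
        ["ceph", "osd", "metadata", "0", "-f", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(meta.get("hostname"), meta.get("osd_objectstore"))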
Jan 26 09:40:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:40:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:23 compute-0 podman[83695]: 2026-01-26 09:40:23.152056031 +0000 UTC m=+0.194180991 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:40:23 compute-0 podman[83675]: 2026-01-26 09:40:23.157726433 +0000 UTC m=+0.389537056 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:40:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:23 compute-0 ceph-mon[74456]: from='osd.0 [v2:192.168.122.100:6802/3688054713,v1:192.168.122.100:6803/3688054713]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 26 09:40:23 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:23 compute-0 ceph-mon[74456]: from='osd.1 [v2:192.168.122.101:6800/3944242284,v1:192.168.122.101:6801/3944242284]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 26 09:40:23 compute-0 ceph-mon[74456]: from='osd.0 [v2:192.168.122.100:6802/3688054713,v1:192.168.122.100:6803/3688054713]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 26 09:40:23 compute-0 ceph-mon[74456]: osdmap e6: 2 total, 0 up, 2 in
Jan 26 09:40:23 compute-0 ceph-mon[74456]: from='osd.0 [v2:192.168.122.100:6802/3688054713,v1:192.168.122.100:6803/3688054713]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 26 09:40:23 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:23 compute-0 ceph-mon[74456]: from='osd.1 [v2:192.168.122.101:6800/3944242284,v1:192.168.122.101:6801/3944242284]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 26 09:40:23 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:23 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:23 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:23 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:23 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 26 09:40:23 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 26 09:40:23 compute-0 sudo[83365]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:40:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:40:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:23 compute-0 sudo[83761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:40:23 compute-0 sudo[83761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:23 compute-0 sudo[83761]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:23 compute-0 sudo[83786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:40:23 compute-0 sudo[83786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 26 09:40:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 09:40:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3688054713,v1:192.168.122.100:6803/3688054713]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 26 09:40:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/3944242284,v1:192.168.122.101:6801/3944242284]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Jan 26 09:40:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 26 09:40:24 compute-0 ceph-osd[82841]: osd.0 0 done with init, starting boot process
Jan 26 09:40:24 compute-0 ceph-osd[82841]: osd.0 0 start_boot
Jan 26 09:40:24 compute-0 ceph-osd[82841]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 26 09:40:24 compute-0 ceph-osd[82841]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 26 09:40:24 compute-0 ceph-osd[82841]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 26 09:40:24 compute-0 ceph-osd[82841]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 26 09:40:24 compute-0 ceph-osd[82841]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 26 09:40:24 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 26 09:40:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 26 09:40:24 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 26 09:40:24 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:24 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 09:40:24 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 09:40:24 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3688054713; not ready for session (expect reconnect)
Jan 26 09:40:24 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3944242284; not ready for session (expect reconnect)
Jan 26 09:40:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 26 09:40:24 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 26 09:40:24 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:24 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 09:40:24 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 09:40:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:40:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:24 compute-0 sudo[83786]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:24 compute-0 sudo[83843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:40:24 compute-0 sudo[83843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:24 compute-0 sudo[83843]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:24 compute-0 sudo[83868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- inventory --format=json-pretty --filter-for-batch
Jan 26 09:40:24 compute-0 sudo[83868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:40:24 compute-0 ceph-mon[74456]: pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:24 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:24 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:24 compute-0 ceph-mon[74456]: from='osd.0 [v2:192.168.122.100:6802/3688054713,v1:192.168.122.100:6803/3688054713]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 26 09:40:24 compute-0 ceph-mon[74456]: from='osd.1 [v2:192.168.122.101:6800/3944242284,v1:192.168.122.101:6801/3944242284]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Jan 26 09:40:24 compute-0 ceph-mon[74456]: osdmap e7: 2 total, 0 up, 2 in
Jan 26 09:40:24 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:24 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:24 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:24 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:24 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:24 compute-0 podman[83930]: 2026-01-26 09:40:24.813387039 +0000 UTC m=+0.052901432 container create b8020c6fa21604effd76c2d359dc7f4ff5e73e3cda267cbec681d354cde6ad1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:40:24 compute-0 systemd[1]: Started libpod-conmon-b8020c6fa21604effd76c2d359dc7f4ff5e73e3cda267cbec681d354cde6ad1a.scope.
Jan 26 09:40:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:24 compute-0 podman[83930]: 2026-01-26 09:40:24.789675151 +0000 UTC m=+0.029189574 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:40:24 compute-0 podman[83930]: 2026-01-26 09:40:24.897981318 +0000 UTC m=+0.137495711 container init b8020c6fa21604effd76c2d359dc7f4ff5e73e3cda267cbec681d354cde6ad1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lichterman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:40:24 compute-0 podman[83930]: 2026-01-26 09:40:24.905093147 +0000 UTC m=+0.144607580 container start b8020c6fa21604effd76c2d359dc7f4ff5e73e3cda267cbec681d354cde6ad1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 26 09:40:24 compute-0 charming_lichterman[83946]: 167 167
Jan 26 09:40:24 compute-0 systemd[1]: libpod-b8020c6fa21604effd76c2d359dc7f4ff5e73e3cda267cbec681d354cde6ad1a.scope: Deactivated successfully.
Jan 26 09:40:24 compute-0 podman[83930]: 2026-01-26 09:40:24.921736227 +0000 UTC m=+0.161250640 container attach b8020c6fa21604effd76c2d359dc7f4ff5e73e3cda267cbec681d354cde6ad1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:40:24 compute-0 podman[83930]: 2026-01-26 09:40:24.922378175 +0000 UTC m=+0.161892608 container died b8020c6fa21604effd76c2d359dc7f4ff5e73e3cda267cbec681d354cde6ad1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 09:40:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d88ea32dd7778220c74eb3367c3ec8402a689cfa02134703a26455affe9fcc3-merged.mount: Deactivated successfully.
Jan 26 09:40:25 compute-0 podman[83930]: 2026-01-26 09:40:25.028564427 +0000 UTC m=+0.268078860 container remove b8020c6fa21604effd76c2d359dc7f4ff5e73e3cda267cbec681d354cde6ad1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_lichterman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:40:25 compute-0 systemd[1]: libpod-conmon-b8020c6fa21604effd76c2d359dc7f4ff5e73e3cda267cbec681d354cde6ad1a.scope: Deactivated successfully.
Jan 26 09:40:25 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3688054713; not ready for session (expect reconnect)
Jan 26 09:40:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 26 09:40:25 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:25 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 09:40:25 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3944242284; not ready for session (expect reconnect)
Jan 26 09:40:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 26 09:40:25 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:25 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 09:40:25 compute-0 podman[83972]: 2026-01-26 09:40:25.181535596 +0000 UTC m=+0.040188705 container create 36c682ea4692b6cbed04e8471d5820a5424a18aceae3fe447c98691775c7cafd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 26 09:40:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:25 compute-0 systemd[1]: Started libpod-conmon-36c682ea4692b6cbed04e8471d5820a5424a18aceae3fe447c98691775c7cafd.scope.
Jan 26 09:40:25 compute-0 podman[83972]: 2026-01-26 09:40:25.164852414 +0000 UTC m=+0.023505533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:40:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b152249693769d831024cf379f7856d94566671fd627c44ce72e1d080436a4d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b152249693769d831024cf379f7856d94566671fd627c44ce72e1d080436a4d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b152249693769d831024cf379f7856d94566671fd627c44ce72e1d080436a4d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b152249693769d831024cf379f7856d94566671fd627c44ce72e1d080436a4d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:25 compute-0 podman[83972]: 2026-01-26 09:40:25.29839371 +0000 UTC m=+0.157046879 container init 36c682ea4692b6cbed04e8471d5820a5424a18aceae3fe447c98691775c7cafd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:40:25 compute-0 podman[83972]: 2026-01-26 09:40:25.311329913 +0000 UTC m=+0.169982992 container start 36c682ea4692b6cbed04e8471d5820a5424a18aceae3fe447c98691775c7cafd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_pasteur, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:40:25 compute-0 podman[83972]: 2026-01-26 09:40:25.325362675 +0000 UTC m=+0.184015794 container attach 36c682ea4692b6cbed04e8471d5820a5424a18aceae3fe447c98691775c7cafd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 09:40:25 compute-0 ceph-mon[74456]: purged_snaps scrub starts
Jan 26 09:40:25 compute-0 ceph-mon[74456]: purged_snaps scrub ok
Jan 26 09:40:25 compute-0 ceph-mon[74456]: purged_snaps scrub starts
Jan 26 09:40:25 compute-0 ceph-mon[74456]: purged_snaps scrub ok
Jan 26 09:40:25 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:25 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:25 compute-0 sudo[84031]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwmxtbmtseepcmmzjijdmgukdmmqcjes ; /usr/bin/python3'
Jan 26 09:40:25 compute-0 sudo[84031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:40:25 compute-0 python3[84034]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]: [
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:     {
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:         "available": false,
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:         "being_replaced": false,
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:         "ceph_device_lvm": false,
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:         "lsm_data": {},
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:         "lvs": [],
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:         "path": "/dev/sr0",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:         "rejected_reasons": [
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "Insufficient space (<5GB)",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "Has a FileSystem"
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:         ],
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:         "sys_api": {
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "actuators": null,
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "device_nodes": [
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:                 "sr0"
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             ],
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "devname": "sr0",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "human_readable_size": "482.00 KB",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "id_bus": "ata",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "model": "QEMU DVD-ROM",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "nr_requests": "2",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "parent": "/dev/sr0",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "partitions": {},
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "path": "/dev/sr0",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "removable": "1",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "rev": "2.5+",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "ro": "0",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "rotational": "1",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "sas_address": "",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "sas_device_handle": "",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "scheduler_mode": "mq-deadline",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "sectors": 0,
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "sectorsize": "2048",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "size": 493568.0,
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "support_discard": "2048",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "type": "disk",
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:             "vendor": "QEMU"
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:         }
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]:     }
Jan 26 09:40:25 compute-0 interesting_pasteur[83988]: ]
Jan 26 09:40:25 compute-0 podman[84768]: 2026-01-26 09:40:25.934919263 +0000 UTC m=+0.043612315 container create 76efb37209a6f194948be26f17d8e8f6703c300842e67f44e50564402ec440dd (image=quay.io/ceph/ceph:v19, name=loving_ganguly, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:40:25 compute-0 systemd[1]: libpod-36c682ea4692b6cbed04e8471d5820a5424a18aceae3fe447c98691775c7cafd.scope: Deactivated successfully.
Jan 26 09:40:25 compute-0 podman[83972]: 2026-01-26 09:40:25.954267086 +0000 UTC m=+0.812920165 container died 36c682ea4692b6cbed04e8471d5820a5424a18aceae3fe447c98691775c7cafd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 09:40:25 compute-0 systemd[1]: Started libpod-conmon-76efb37209a6f194948be26f17d8e8f6703c300842e67f44e50564402ec440dd.scope.
Jan 26 09:40:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0f485ac6d3329a69a195fc6815b6e31f5655ef8752c650062a36c182628ca4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0f485ac6d3329a69a195fc6815b6e31f5655ef8752c650062a36c182628ca4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0f485ac6d3329a69a195fc6815b6e31f5655ef8752c650062a36c182628ca4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b152249693769d831024cf379f7856d94566671fd627c44ce72e1d080436a4d4-merged.mount: Deactivated successfully.
Jan 26 09:40:26 compute-0 podman[84768]: 2026-01-26 09:40:25.918342174 +0000 UTC m=+0.027035246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:40:26 compute-0 podman[84768]: 2026-01-26 09:40:26.035591019 +0000 UTC m=+0.144284091 container init 76efb37209a6f194948be26f17d8e8f6703c300842e67f44e50564402ec440dd (image=quay.io/ceph/ceph:v19, name=loving_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:40:26 compute-0 podman[84768]: 2026-01-26 09:40:26.044017002 +0000 UTC m=+0.152710054 container start 76efb37209a6f194948be26f17d8e8f6703c300842e67f44e50564402ec440dd (image=quay.io/ceph/ceph:v19, name=loving_ganguly, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:40:26 compute-0 podman[83972]: 2026-01-26 09:40:26.095054733 +0000 UTC m=+0.953707812 container remove 36c682ea4692b6cbed04e8471d5820a5424a18aceae3fe447c98691775c7cafd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 09:40:26 compute-0 systemd[1]: libpod-conmon-36c682ea4692b6cbed04e8471d5820a5424a18aceae3fe447c98691775c7cafd.scope: Deactivated successfully.
Jan 26 09:40:26 compute-0 podman[84768]: 2026-01-26 09:40:26.101614937 +0000 UTC m=+0.210307989 container attach 76efb37209a6f194948be26f17d8e8f6703c300842e67f44e50564402ec440dd (image=quay.io/ceph/ceph:v19, name=loving_ganguly, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:40:26 compute-0 sudo[83868]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:40:26 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3688054713; not ready for session (expect reconnect)
Jan 26 09:40:26 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3944242284; not ready for session (expect reconnect)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:26 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 09:40:26 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 26 09:40:26 compute-0 ceph-mgr[74755]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 26 09:40:26 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 26 09:40:26 compute-0 ceph-mgr[74755]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 26 09:40:26 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 26 09:40:26 compute-0 ceph-mgr[74755]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Jan 26 09:40:26 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738555933' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 09:40:26 compute-0 loving_ganguly[84997]: 
Jan 26 09:40:26 compute-0 loving_ganguly[84997]: {"fsid":"1a70b85d-e3fd-5814-8a6a-37ea00fcae30","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":119,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":7,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1769420409,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-26T09:38:21:975599+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-26T09:39:48.684414+0000","services":{}},"progress_events":{}}
Jan 26 09:40:26 compute-0 systemd[1]: libpod-76efb37209a6f194948be26f17d8e8f6703c300842e67f44e50564402ec440dd.scope: Deactivated successfully.
Jan 26 09:40:26 compute-0 podman[85028]: 2026-01-26 09:40:26.523306962 +0000 UTC m=+0.025964928 container died 76efb37209a6f194948be26f17d8e8f6703c300842e67f44e50564402ec440dd (image=quay.io/ceph/ceph:v19, name=loving_ganguly, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 09:40:26 compute-0 ceph-mon[74456]: pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:26 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3738555933' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 09:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd0f485ac6d3329a69a195fc6815b6e31f5655ef8752c650062a36c182628ca4-merged.mount: Deactivated successfully.
Jan 26 09:40:26 compute-0 podman[85028]: 2026-01-26 09:40:26.650922431 +0000 UTC m=+0.153580317 container remove 76efb37209a6f194948be26f17d8e8f6703c300842e67f44e50564402ec440dd (image=quay.io/ceph/ceph:v19, name=loving_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:40:26 compute-0 systemd[1]: libpod-conmon-76efb37209a6f194948be26f17d8e8f6703c300842e67f44e50564402ec440dd.scope: Deactivated successfully.
Jan 26 09:40:26 compute-0 sudo[84031]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:27 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3688054713; not ready for session (expect reconnect)
Jan 26 09:40:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 26 09:40:27 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:27 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 09:40:27 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3944242284; not ready for session (expect reconnect)
Jan 26 09:40:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 26 09:40:27 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:27 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 09:40:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:40:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v50: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:27 compute-0 ceph-osd[82841]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 27.573 iops: 7058.578 elapsed_sec: 0.425
Jan 26 09:40:27 compute-0 ceph-osd[82841]: log_channel(cluster) log [WRN] : OSD bench result of 7058.577675 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 26 09:40:27 compute-0 ceph-osd[82841]: osd.0 0 waiting for initial osdmap
Jan 26 09:40:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0[82837]: 2026-01-26T09:40:27.393+0000 7f9644fee640 -1 osd.0 0 waiting for initial osdmap
Jan 26 09:40:27 compute-0 ceph-osd[82841]: osd.0 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 26 09:40:27 compute-0 ceph-osd[82841]: osd.0 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 26 09:40:27 compute-0 ceph-osd[82841]: osd.0 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 26 09:40:27 compute-0 ceph-osd[82841]: osd.0 7 check_osdmap_features require_osd_release unknown -> squid
Jan 26 09:40:27 compute-0 ceph-osd[82841]: osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 26 09:40:27 compute-0 ceph-osd[82841]: osd.0 7 set_numa_affinity not setting numa affinity
Jan 26 09:40:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-osd-0[82837]: 2026-01-26T09:40:27.411+0000 7f9640616640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 26 09:40:27 compute-0 ceph-osd[82841]: osd.0 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 26 09:40:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 26 09:40:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 09:40:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Jan 26 09:40:27 compute-0 ceph-mon[74456]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 26 09:40:27 compute-0 ceph-mon[74456]: Unable to set osd_memory_target on compute-0 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 26 09:40:27 compute-0 ceph-mon[74456]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 26 09:40:27 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:27 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:27 compute-0 ceph-osd[82841]: osd.0 8 state: booting -> active
Jan 26 09:40:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3688054713,v1:192.168.122.100:6803/3688054713] boot
Jan 26 09:40:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Jan 26 09:40:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 26 09:40:27 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 26 09:40:27 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:27 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 09:40:28 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3944242284; not ready for session (expect reconnect)
Jan 26 09:40:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 26 09:40:28 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:28 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 09:40:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 26 09:40:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 09:40:28 compute-0 ceph-mon[74456]: pgmap v50: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 09:40:28 compute-0 ceph-mon[74456]: OSD bench result of 7058.577675 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 26 09:40:28 compute-0 ceph-mon[74456]: osd.0 [v2:192.168.122.100:6802/3688054713,v1:192.168.122.100:6803/3688054713] boot
Jan 26 09:40:28 compute-0 ceph-mon[74456]: osdmap e8: 2 total, 1 up, 2 in
Jan 26 09:40:28 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:40:28 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:28 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Jan 26 09:40:28 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/3944242284,v1:192.168.122.101:6801/3944242284] boot
Jan 26 09:40:28 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Jan 26 09:40:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 26 09:40:28 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:28 compute-0 ceph-mgr[74755]: [devicehealth INFO root] creating mgr pool
Jan 26 09:40:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 26 09:40:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 26 09:40:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v53: 0 pgs: ; 0 B data, 852 MiB used, 39 GiB / 40 GiB avail
Jan 26 09:40:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 26 09:40:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 09:40:30 compute-0 ceph-mon[74456]: OSD bench result of 7041.512344 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 26 09:40:30 compute-0 ceph-mon[74456]: osd.1 [v2:192.168.122.101:6800/3944242284,v1:192.168.122.101:6801/3944242284] boot
Jan 26 09:40:30 compute-0 ceph-mon[74456]: osdmap e9: 2 total, 2 up, 2 in
Jan 26 09:40:30 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:40:30 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 26 09:40:30 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 26 09:40:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Jan 26 09:40:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Jan 26 09:40:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 26 09:40:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 26 09:40:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 26 09:40:30 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Jan 26 09:40:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 26 09:40:30 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 26 09:40:31 compute-0 ceph-mon[74456]: pgmap v53: 0 pgs: ; 0 B data, 852 MiB used, 39 GiB / 40 GiB avail
Jan 26 09:40:31 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 26 09:40:31 compute-0 ceph-mon[74456]: osdmap e10: 2 total, 2 up, 2 in
Jan 26 09:40:31 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 26 09:40:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 unknown; 0 B data, 852 MiB used, 39 GiB / 40 GiB avail
Jan 26 09:40:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 26 09:40:31 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 26 09:40:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Jan 26 09:40:31 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Jan 26 09:40:31 compute-0 ceph-mgr[74755]: [devicehealth INFO root] creating main.db for devicehealth
Jan 26 09:40:31 compute-0 ceph-osd[82841]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 26 09:40:31 compute-0 ceph-osd[82841]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 26 09:40:31 compute-0 ceph-osd[82841]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 26 09:40:31 compute-0 ceph-mgr[74755]: [devicehealth INFO root] Check health
Jan 26 09:40:31 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 26 09:40:31 compute-0 sudo[85057]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 26 09:40:31 compute-0 sudo[85057]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 26 09:40:31 compute-0 sudo[85057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 26 09:40:31 compute-0 sudo[85057]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:31 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 26 09:40:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 26 09:40:31 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:40:32 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 09:40:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:40:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 26 09:40:32 compute-0 ceph-mon[74456]: pgmap v55: 1 pgs: 1 unknown; 0 B data, 852 MiB used, 39 GiB / 40 GiB avail
Jan 26 09:40:32 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 26 09:40:32 compute-0 ceph-mon[74456]: osdmap e11: 2 total, 2 up, 2 in
Jan 26 09:40:32 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 26 09:40:32 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 26 09:40:32 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:40:32 compute-0 ceph-mon[74456]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 09:40:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Jan 26 09:40:32 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 26 09:40:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 26 09:40:33 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 26 09:40:33 compute-0 ceph-mon[74456]: osdmap e12: 2 total, 2 up, 2 in
Jan 26 09:40:33 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.zllcia(active, since 107s)
Jan 26 09:40:34 compute-0 ceph-mon[74456]: pgmap v58: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 26 09:40:34 compute-0 ceph-mon[74456]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 26 09:40:34 compute-0 ceph-mon[74456]: mgrmap e9: compute-0.zllcia(active, since 107s)
Jan 26 09:40:34 compute-0 sshd-session[85060]: Invalid user admin from 157.245.76.178 port 57892
Jan 26 09:40:34 compute-0 sshd-session[85060]: Connection closed by invalid user admin 157.245.76.178 port 57892 [preauth]
Jan 26 09:40:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:35 compute-0 ceph-mon[74456]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:40:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:38 compute-0 ceph-mon[74456]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:40 compute-0 ceph-mon[74456]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:40:42 compute-0 ceph-mon[74456]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:44 compute-0 ceph-mon[74456]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:46 compute-0 ceph-mon[74456]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:40:46
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.mgr']
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:40:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:40:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:40:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:48 compute-0 ceph-mon[74456]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:50 compute-0 ceph-mon[74456]: pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:40:52 compute-0 ceph-mon[74456]: pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:40:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:40:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:40:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:40:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 26 09:40:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 09:40:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:40:52 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:40:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:40:52 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:40:52 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:40:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:53 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:40:53 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:40:53 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:53 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:53 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:53 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:53 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 09:40:53 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:53 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:40:53 compute-0 ceph-mon[74456]: Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:40:53 compute-0 ceph-mon[74456]: pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:53 compute-0 ceph-mon[74456]: Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:40:54 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:40:54 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:40:54 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:40:54 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:40:54 compute-0 ceph-mon[74456]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:40:54 compute-0 ceph-mon[74456]: Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:40:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:40:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:40:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:40:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:55 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 25f58fd5-63b6-4cc7-aae9-9212ea579be0 (Updating mon deployment (+2 -> 3))
Jan 26 09:40:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 26 09:40:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 09:40:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 26 09:40:55 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 09:40:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:40:55 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:55 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 26 09:40:55 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 26 09:40:55 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 26 09:40:55 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 26 09:40:56 compute-0 ceph-mon[74456]: pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:56 compute-0 ceph-mon[74456]: pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 09:40:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 09:40:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:40:56 compute-0 ceph-mon[74456]: Deploying daemon mon.compute-2 on compute-2
Jan 26 09:40:56 compute-0 ceph-mon[74456]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 26 09:40:56 compute-0 ceph-mon[74456]: Cluster is now healthy
Jan 26 09:40:56 compute-0 sudo[85085]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acslngdiebfiqayrckgnhundpsnpvlgs ; /usr/bin/python3'
Jan 26 09:40:56 compute-0 sudo[85085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:40:56 compute-0 python3[85087]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:40:57 compute-0 podman[85089]: 2026-01-26 09:40:57.01655485 +0000 UTC m=+0.064806696 container create 8bb05b85df622b17648ec4b1bf689a0a353ee41e451cecd2591ac86ca90da1e4 (image=quay.io/ceph/ceph:v19, name=clever_elbakyan, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 26 09:40:57 compute-0 systemd[75791]: Starting Mark boot as successful...
Jan 26 09:40:57 compute-0 systemd[75791]: Finished Mark boot as successful.
Jan 26 09:40:57 compute-0 podman[85089]: 2026-01-26 09:40:56.98859044 +0000 UTC m=+0.036842316 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:40:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:40:57 compute-0 systemd[1]: Started libpod-conmon-8bb05b85df622b17648ec4b1bf689a0a353ee41e451cecd2591ac86ca90da1e4.scope.
Jan 26 09:40:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54932248b87f322e9edd991656c9634fcb67e29cb4e56f8e3341e8391492e0e4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54932248b87f322e9edd991656c9634fcb67e29cb4e56f8e3341e8391492e0e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54932248b87f322e9edd991656c9634fcb67e29cb4e56f8e3341e8391492e0e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:57 compute-0 podman[85089]: 2026-01-26 09:40:57.38459168 +0000 UTC m=+0.432843576 container init 8bb05b85df622b17648ec4b1bf689a0a353ee41e451cecd2591ac86ca90da1e4 (image=quay.io/ceph/ceph:v19, name=clever_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 26 09:40:57 compute-0 podman[85089]: 2026-01-26 09:40:57.396988549 +0000 UTC m=+0.445240435 container start 8bb05b85df622b17648ec4b1bf689a0a353ee41e451cecd2591ac86ca90da1e4 (image=quay.io/ceph/ceph:v19, name=clever_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:40:57 compute-0 podman[85089]: 2026-01-26 09:40:57.468737157 +0000 UTC m=+0.516989013 container attach 8bb05b85df622b17648ec4b1bf689a0a353ee41e451cecd2591ac86ca90da1e4 (image=quay.io/ceph/ceph:v19, name=clever_elbakyan, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:40:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 26 09:40:57 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2077810228' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 09:40:57 compute-0 clever_elbakyan[85107]: 
Jan 26 09:40:57 compute-0 clever_elbakyan[85107]: {"fsid":"1a70b85d-e3fd-5814-8a6a-37ea00fcae30","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":150,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":12,"num_osds":2,"num_up_osds":2,"osd_up_since":1769420428,"num_in_osds":2,"osd_in_since":1769420409,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":55783424,"bytes_avail":42885500928,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2026-01-26T09:38:21:975599+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-26T09:39:48.684414+0000","services":{}},"progress_events":{"25f58fd5-63b6-4cc7-aae9-9212ea579be0":{"message":"Updating mon deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 26 09:40:57 compute-0 systemd[1]: libpod-8bb05b85df622b17648ec4b1bf689a0a353ee41e451cecd2591ac86ca90da1e4.scope: Deactivated successfully.
Jan 26 09:40:57 compute-0 podman[85089]: 2026-01-26 09:40:57.85162684 +0000 UTC m=+0.899878686 container died 8bb05b85df622b17648ec4b1bf689a0a353ee41e451cecd2591ac86ca90da1e4 (image=quay.io/ceph/ceph:v19, name=clever_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:40:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-54932248b87f322e9edd991656c9634fcb67e29cb4e56f8e3341e8391492e0e4-merged.mount: Deactivated successfully.
Jan 26 09:40:58 compute-0 podman[85089]: 2026-01-26 09:40:58.148435265 +0000 UTC m=+1.196687151 container remove 8bb05b85df622b17648ec4b1bf689a0a353ee41e451cecd2591ac86ca90da1e4 (image=quay.io/ceph/ceph:v19, name=clever_elbakyan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 09:40:58 compute-0 sudo[85085]: pam_unix(sudo:session): session closed for user root
Jan 26 09:40:58 compute-0 systemd[1]: libpod-conmon-8bb05b85df622b17648ec4b1bf689a0a353ee41e451cecd2591ac86ca90da1e4.scope: Deactivated successfully.
Jan 26 09:40:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:40:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 26 09:40:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 26 09:40:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:40:58 compute-0 sudo[85167]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lidowmoehtexlentjdgbfquqhrrqwlct ; /usr/bin/python3'
Jan 26 09:40:58 compute-0 sudo[85167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:40:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 26 09:40:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 26 09:40:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 26 09:40:58 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/949511774; not ready for session (expect reconnect)
Jan 26 09:40:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:40:58 compute-0 python3[85169]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:40:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 26 09:40:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:40:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 26 09:40:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 09:40:58 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Jan 26 09:40:58 compute-0 ceph-mon[74456]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 26 09:40:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 09:40:58 compute-0 ceph-mon[74456]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 26 09:40:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:40:58 compute-0 ceph-mon[74456]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 26 09:40:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:40:58 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 26 09:40:58 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 26 09:40:58 compute-0 ceph-mon[74456]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 26 09:40:58 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 09:40:58 compute-0 podman[85170]: 2026-01-26 09:40:58.63416552 +0000 UTC m=+0.024656683 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:40:58 compute-0 podman[85170]: 2026-01-26 09:40:58.760391631 +0000 UTC m=+0.150882814 container create a9b30398f9045006ac4aecc2e24db56158b22544120d7097dee2963134895a3f (image=quay.io/ceph/ceph:v19, name=elastic_jones, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:40:58 compute-0 systemd[1]: Started libpod-conmon-a9b30398f9045006ac4aecc2e24db56158b22544120d7097dee2963134895a3f.scope.
Jan 26 09:40:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c138dd3847ebb97057587681bd4deb1e1ed23ed5b28c9c2a76fa5e3c103ea2bb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c138dd3847ebb97057587681bd4deb1e1ed23ed5b28c9c2a76fa5e3c103ea2bb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:40:59 compute-0 podman[85170]: 2026-01-26 09:40:59.041925521 +0000 UTC m=+0.432416654 container init a9b30398f9045006ac4aecc2e24db56158b22544120d7097dee2963134895a3f (image=quay.io/ceph/ceph:v19, name=elastic_jones, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 09:40:59 compute-0 podman[85170]: 2026-01-26 09:40:59.051140945 +0000 UTC m=+0.441632088 container start a9b30398f9045006ac4aecc2e24db56158b22544120d7097dee2963134895a3f (image=quay.io/ceph/ceph:v19, name=elastic_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 09:40:59 compute-0 podman[85170]: 2026-01-26 09:40:59.115893559 +0000 UTC m=+0.506384702 container attach a9b30398f9045006ac4aecc2e24db56158b22544120d7097dee2963134895a3f (image=quay.io/ceph/ceph:v19, name=elastic_jones, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:40:59 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 26 09:40:59 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 26 09:40:59 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/949511774; not ready for session (expect reconnect)
Jan 26 09:40:59 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 26 09:40:59 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:40:59 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 26 09:40:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:40:59 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 26 09:41:00 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/949511774; not ready for session (expect reconnect)
Jan 26 09:41:00 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 26 09:41:00 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:00 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 26 09:41:00 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 26 09:41:01 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/949511774; not ready for session (expect reconnect)
Jan 26 09:41:01 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 26 09:41:01 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:01 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 26 09:41:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:02 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 26 09:41:02 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 26 09:41:02 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/949511774; not ready for session (expect reconnect)
Jan 26 09:41:02 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 26 09:41:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:02 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 26 09:41:02 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 26 09:41:03 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/949511774; not ready for session (expect reconnect)
Jan 26 09:41:03 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:03 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 26 09:41:03 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 26 09:41:03 compute-0 ceph-mon[74456]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 26 09:41:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:03 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : monmap epoch 2
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : last_changed 2026-01-26T09:40:58.536157+0000
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : created 2026-01-26T09:38:19.068625+0000
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 26 09:41:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsmap 
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.zllcia(active, since 2m)
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 26 09:41:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:03 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 26 09:41:03 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 26 09:41:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 09:41:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:41:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:03 compute-0 ceph-mon[74456]: mon.compute-0 calling monitor election
Jan 26 09:41:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:03 compute-0 ceph-mon[74456]: pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:03 compute-0 ceph-mon[74456]: mon.compute-2 calling monitor election
Jan 26 09:41:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:03 compute-0 ceph-mon[74456]: pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:03 compute-0 ceph-mon[74456]: pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:03 compute-0 ceph-mon[74456]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 26 09:41:03 compute-0 ceph-mon[74456]: monmap epoch 2
Jan 26 09:41:03 compute-0 ceph-mon[74456]: fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:41:03 compute-0 ceph-mon[74456]: last_changed 2026-01-26T09:40:58.536157+0000
Jan 26 09:41:03 compute-0 ceph-mon[74456]: created 2026-01-26T09:38:19.068625+0000
Jan 26 09:41:03 compute-0 ceph-mon[74456]: min_mon_release 19 (squid)
Jan 26 09:41:03 compute-0 ceph-mon[74456]: election_strategy: 1
Jan 26 09:41:03 compute-0 ceph-mon[74456]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 26 09:41:03 compute-0 ceph-mon[74456]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 26 09:41:03 compute-0 ceph-mon[74456]: fsmap 
Jan 26 09:41:03 compute-0 ceph-mon[74456]: osdmap e12: 2 total, 2 up, 2 in
Jan 26 09:41:03 compute-0 ceph-mon[74456]: mgrmap e9: compute-0.zllcia(active, since 2m)
Jan 26 09:41:03 compute-0 ceph-mon[74456]: overall HEALTH_OK
Jan 26 09:41:03 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:03 compute-0 ceph-mon[74456]: Deploying daemon mon.compute-1 on compute-1
Jan 26 09:41:04 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/949511774; not ready for session (expect reconnect)
Jan 26 09:41:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 26 09:41:04 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:04 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3981712437' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 09:41:05 compute-0 ceph-mgr[74755]: mgr.server handle_report got status from non-daemon mon.compute-2
Jan 26 09:41:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:41:05.541+0000 7ff4ce836640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:41:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:05 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev 25f58fd5-63b6-4cc7-aae9-9212ea579be0 (Updating mon deployment (+2 -> 3))
Jan 26 09:41:05 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 25f58fd5-63b6-4cc7-aae9-9212ea579be0 (Updating mon deployment (+2 -> 3)) in 10 seconds
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:05 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev bbc2e22d-6000-4d2c-8675-7dea0fc960cb (Updating mgr deployment (+2 -> 3))
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.oynaeu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oynaeu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.oynaeu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
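The new standby mgr gets a keyring with the standard cephadm caps; the equivalent manual call, with entity and caps exactly as in the audit lines above:

    ceph auth get-or-create mgr.compute-2.oynaeu \
        mon 'profile mgr' osd 'allow *' mds 'allow *'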
Jan 26 09:41:05 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3035468071; not ready for session (expect reconnect)
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:05 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:05 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.oynaeu on compute-2
Jan 26 09:41:05 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.oynaeu on compute-2
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3981712437' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 26 09:41:05 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 26 09:41:05 compute-0 ceph-mon[74456]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 09:41:05 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 26 09:41:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:06 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3035468071; not ready for session (expect reconnect)
Jan 26 09:41:06 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 26 09:41:06 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:06 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 26 09:41:06 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 3 completed events
Jan 26 09:41:06 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:41:07 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 26 09:41:07 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:41:07 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 26 09:41:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:07 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3035468071; not ready for session (expect reconnect)
Jan 26 09:41:07 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 26 09:41:07 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:07 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 26 09:41:07 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 26 09:41:08 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3035468071; not ready for session (expect reconnect)
Jan 26 09:41:08 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 26 09:41:08 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:08 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 26 09:41:08 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 26 09:41:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:09 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3035468071; not ready for session (expect reconnect)
Jan 26 09:41:09 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 26 09:41:09 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:09 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 26 09:41:10 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 26 09:41:10 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 26 09:41:10 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3035468071; not ready for session (expect reconnect)
Jan 26 09:41:10 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:10 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 26 09:41:10 compute-0 ceph-mon[74456]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Jan 26 09:41:10 compute-0 ceph-mon[74456]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : monmap epoch 3
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : last_changed 2026-01-26T09:41:05.675064+0000
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : created 2026-01-26T09:38:19.068625+0000
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 26 09:41:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsmap 
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.zllcia(active, since 2m)
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 26 09:41:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 26 09:41:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3981712437' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 09:41:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Jan 26 09:41:11 compute-0 elastic_jones[85185]: pool 'vms' created
Jan 26 09:41:11 compute-0 systemd[1]: libpod-a9b30398f9045006ac4aecc2e24db56158b22544120d7097dee2963134895a3f.scope: Deactivated successfully.
Jan 26 09:41:11 compute-0 podman[85170]: 2026-01-26 09:41:11.094897097 +0000 UTC m=+12.485388250 container died a9b30398f9045006ac4aecc2e24db56158b22544120d7097dee2963134895a3f (image=quay.io/ceph/ceph:v19, name=elastic_jones, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 09:41:11 compute-0 ceph-mon[74456]: pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:11 compute-0 ceph-mon[74456]: Deploying daemon mgr.compute-2.oynaeu on compute-2
Jan 26 09:41:11 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3981712437' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 09:41:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:41:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:11 compute-0 ceph-mon[74456]: mon.compute-0 calling monitor election
Jan 26 09:41:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:41:11 compute-0 ceph-mon[74456]: mon.compute-2 calling monitor election
Jan 26 09:41:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:11 compute-0 ceph-mon[74456]: pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:11 compute-0 ceph-mon[74456]: mon.compute-1 calling monitor election
Jan 26 09:41:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:11 compute-0 ceph-mon[74456]: pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:11 compute-0 ceph-mon[74456]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 26 09:41:11 compute-0 ceph-mon[74456]: monmap epoch 3
Jan 26 09:41:11 compute-0 ceph-mon[74456]: fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:41:11 compute-0 ceph-mon[74456]: last_changed 2026-01-26T09:41:05.675064+0000
Jan 26 09:41:11 compute-0 ceph-mon[74456]: created 2026-01-26T09:38:19.068625+0000
Jan 26 09:41:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:11 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 26 09:41:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:41:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c138dd3847ebb97057587681bd4deb1e1ed23ed5b28c9c2a76fa5e3c103ea2bb-merged.mount: Deactivated successfully.
Jan 26 09:41:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 26 09:41:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.xammti", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 26 09:41:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.xammti", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 09:41:11 compute-0 podman[85170]: 2026-01-26 09:41:11.232267882 +0000 UTC m=+12.622759025 container remove a9b30398f9045006ac4aecc2e24db56158b22544120d7097dee2963134895a3f (image=quay.io/ceph/ceph:v19, name=elastic_jones, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:41:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.xammti", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 26 09:41:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 26 09:41:11 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 09:41:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:11 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:11 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.xammti on compute-1
Jan 26 09:41:11 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.xammti on compute-1
Jan 26 09:41:11 compute-0 systemd[1]: libpod-conmon-a9b30398f9045006ac4aecc2e24db56158b22544120d7097dee2963134895a3f.scope: Deactivated successfully.
Jan 26 09:41:11 compute-0 sudo[85167]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:11 compute-0 sudo[85250]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkhelnfjlzrtfdvphyfbepxfwtwxfvyt ; /usr/bin/python3'
Jan 26 09:41:11 compute-0 sudo[85250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:11 compute-0 python3[85252]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
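Reflowed for readability, the Ansible task above runs a one-shot ceph CLI container (every value taken from the logged command):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create volumes replicated_rule --autoscale-mode on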
Jan 26 09:41:11 compute-0 podman[85253]: 2026-01-26 09:41:11.53180594 +0000 UTC m=+0.037110194 container create 770bad091ab63d1867e39ca59f3a1d27d70cff607106bfb8548970f27913913d (image=quay.io/ceph/ceph:v19, name=objective_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:11 compute-0 systemd[1]: Started libpod-conmon-770bad091ab63d1867e39ca59f3a1d27d70cff607106bfb8548970f27913913d.scope.
Jan 26 09:41:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512a7b0b8a5dd0ecf25ba526c2553dee3f93e3f348c8f0adfcbea1f24cc45ce2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512a7b0b8a5dd0ecf25ba526c2553dee3f93e3f348c8f0adfcbea1f24cc45ce2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:11 compute-0 podman[85253]: 2026-01-26 09:41:11.603562308 +0000 UTC m=+0.108866592 container init 770bad091ab63d1867e39ca59f3a1d27d70cff607106bfb8548970f27913913d (image=quay.io/ceph/ceph:v19, name=objective_curran, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:11 compute-0 podman[85253]: 2026-01-26 09:41:11.609782624 +0000 UTC m=+0.115086878 container start 770bad091ab63d1867e39ca59f3a1d27d70cff607106bfb8548970f27913913d (image=quay.io/ceph/ceph:v19, name=objective_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:11 compute-0 podman[85253]: 2026-01-26 09:41:11.515458007 +0000 UTC m=+0.020762281 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:11 compute-0 podman[85253]: 2026-01-26 09:41:11.614381415 +0000 UTC m=+0.119685709 container attach 770bad091ab63d1867e39ca59f3a1d27d70cff607106bfb8548970f27913913d (image=quay.io/ceph/ceph:v19, name=objective_curran, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 26 09:41:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v79: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:11 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3035468071; not ready for session (expect reconnect)
Jan 26 09:41:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 26 09:41:11 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 26 09:41:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3023141661' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 09:41:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 26 09:41:12 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
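POOL_APP_NOT_ENABLED fires because the freshly created pools carry no application tag yet. For RBD-backed OpenStack pools the usual remedy (a sketch; one line per pool) is:

    ceph osd pool application enable vms rbd
    ceph osd pool application enable volumes rbd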
Jan 26 09:41:12 compute-0 ceph-mon[74456]: min_mon_release 19 (squid)
Jan 26 09:41:12 compute-0 ceph-mon[74456]: election_strategy: 1
Jan 26 09:41:12 compute-0 ceph-mon[74456]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 26 09:41:12 compute-0 ceph-mon[74456]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 26 09:41:12 compute-0 ceph-mon[74456]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 26 09:41:12 compute-0 ceph-mon[74456]: fsmap 
Jan 26 09:41:12 compute-0 ceph-mon[74456]: osdmap e12: 2 total, 2 up, 2 in
Jan 26 09:41:12 compute-0 ceph-mon[74456]: mgrmap e9: compute-0.zllcia(active, since 2m)
Jan 26 09:41:12 compute-0 ceph-mon[74456]: overall HEALTH_OK
Jan 26 09:41:12 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:12 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3981712437' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 09:41:12 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:12 compute-0 ceph-mon[74456]: osdmap e13: 2 total, 2 up, 2 in
Jan 26 09:41:12 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:12 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:12 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.xammti", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 09:41:12 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.xammti", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 26 09:41:12 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 09:41:12 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:12 compute-0 ceph-mon[74456]: Deploying daemon mgr.compute-1.xammti on compute-1
Jan 26 09:41:12 compute-0 ceph-mon[74456]: pgmap v79: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:12 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:41:12 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3023141661' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 09:41:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3023141661' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 09:41:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Jan 26 09:41:12 compute-0 objective_curran[85268]: pool 'volumes' created
Jan 26 09:41:12 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 26 09:41:12 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:12 compute-0 systemd[1]: libpod-770bad091ab63d1867e39ca59f3a1d27d70cff607106bfb8548970f27913913d.scope: Deactivated successfully.
Jan 26 09:41:12 compute-0 podman[85253]: 2026-01-26 09:41:12.529403671 +0000 UTC m=+1.034707925 container died 770bad091ab63d1867e39ca59f3a1d27d70cff607106bfb8548970f27913913d (image=quay.io/ceph/ceph:v19, name=objective_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-512a7b0b8a5dd0ecf25ba526c2553dee3f93e3f348c8f0adfcbea1f24cc45ce2-merged.mount: Deactivated successfully.
Jan 26 09:41:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:41:12.678+0000 7ff4ce836640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Jan 26 09:41:12 compute-0 ceph-mgr[74755]: mgr.server handle_report got status from non-daemon mon.compute-1
Jan 26 09:41:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:41:12 compute-0 podman[85253]: 2026-01-26 09:41:12.863083342 +0000 UTC m=+1.368387596 container remove 770bad091ab63d1867e39ca59f3a1d27d70cff607106bfb8548970f27913913d (image=quay.io/ceph/ceph:v19, name=objective_curran, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 09:41:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:41:12 compute-0 sudo[85250]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:12 compute-0 systemd[1]: libpod-conmon-770bad091ab63d1867e39ca59f3a1d27d70cff607106bfb8548970f27913913d.scope: Deactivated successfully.
Jan 26 09:41:13 compute-0 sudo[85332]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sevikauwmkezpgulcgafuscokwptjtkq ; /usr/bin/python3'
Jan 26 09:41:13 compute-0 sudo[85332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 26 09:41:13 compute-0 python3[85334]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:13 compute-0 podman[85335]: 2026-01-26 09:41:13.179975319 +0000 UTC m=+0.020582387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:13 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev bbc2e22d-6000-4d2c-8675-7dea0fc960cb (Updating mgr deployment (+2 -> 3))
Jan 26 09:41:13 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event bbc2e22d-6000-4d2c-8675-7dea0fc960cb (Updating mgr deployment (+2 -> 3)) in 8 seconds
Jan 26 09:41:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 26 09:41:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 26 09:41:13 compute-0 podman[85335]: 2026-01-26 09:41:13.56697062 +0000 UTC m=+0.407577668 container create 7a59d84311e9595ffd0494e0e7ef3c30c0139b162950c74f7b13355b56097cfc (image=quay.io/ceph/ceph:v19, name=gifted_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:13 compute-0 ceph-mon[74456]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 09:41:13 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3023141661' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 09:41:13 compute-0 ceph-mon[74456]: osdmap e14: 2 total, 2 up, 2 in
Jan 26 09:41:13 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:13 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Jan 26 09:41:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:13 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Jan 26 09:41:13 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 50b80179-64eb-4754-b9f2-a51788facc85 (Updating crash deployment (+1 -> 3))
Jan 26 09:41:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 26 09:41:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 09:41:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v82: 3 pgs: 1 creating+peering, 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:13 compute-0 systemd[1]: Started libpod-conmon-7a59d84311e9595ffd0494e0e7ef3c30c0139b162950c74f7b13355b56097cfc.scope.
Jan 26 09:41:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa09c9eafee364283df82a478e725d5439863cef2c55ef9898b814c9a5cdb0d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa09c9eafee364283df82a478e725d5439863cef2c55ef9898b814c9a5cdb0d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
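The crash collector uses the restricted "profile crash" capability on both mon and mgr; equivalent manual form, matching the audit lines above:

    ceph auth get-or-create client.crash.compute-2 \
        mon 'profile crash' mgr 'profile crash'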
Jan 26 09:41:13 compute-0 podman[85335]: 2026-01-26 09:41:13.719340323 +0000 UTC m=+0.559947381 container init 7a59d84311e9595ffd0494e0e7ef3c30c0139b162950c74f7b13355b56097cfc (image=quay.io/ceph/ceph:v19, name=gifted_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:41:13 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:13 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:13 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 26 09:41:13 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 26 09:41:13 compute-0 podman[85335]: 2026-01-26 09:41:13.727005456 +0000 UTC m=+0.567612504 container start 7a59d84311e9595ffd0494e0e7ef3c30c0139b162950c74f7b13355b56097cfc (image=quay.io/ceph/ceph:v19, name=gifted_haslett, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:13 compute-0 podman[85335]: 2026-01-26 09:41:13.738804248 +0000 UTC m=+0.579411326 container attach 7a59d84311e9595ffd0494e0e7ef3c30c0139b162950c74f7b13355b56097cfc (image=quay.io/ceph/ceph:v19, name=gifted_haslett, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 09:41:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 26 09:41:14 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1199163324' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 09:41:14 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:14 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:14 compute-0 ceph-mon[74456]: osdmap e15: 2 total, 2 up, 2 in
Jan 26 09:41:14 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 09:41:14 compute-0 ceph-mon[74456]: pgmap v82: 3 pgs: 1 creating+peering, 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:14 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 26 09:41:14 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:14 compute-0 ceph-mon[74456]: Deploying daemon crash.compute-2 on compute-2
Jan 26 09:41:14 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1199163324' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 09:41:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 26 09:41:14 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1199163324' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 09:41:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Jan 26 09:41:14 compute-0 gifted_haslett[85350]: pool 'backups' created
Jan 26 09:41:14 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Jan 26 09:41:14 compute-0 systemd[1]: libpod-7a59d84311e9595ffd0494e0e7ef3c30c0139b162950c74f7b13355b56097cfc.scope: Deactivated successfully.
Jan 26 09:41:14 compute-0 podman[85335]: 2026-01-26 09:41:14.849715859 +0000 UTC m=+1.690322907 container died 7a59d84311e9595ffd0494e0e7ef3c30c0139b162950c74f7b13355b56097cfc (image=quay.io/ceph/ceph:v19, name=gifted_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 09:41:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa09c9eafee364283df82a478e725d5439863cef2c55ef9898b814c9a5cdb0d4-merged.mount: Deactivated successfully.
Jan 26 09:41:15 compute-0 podman[85335]: 2026-01-26 09:41:15.185443414 +0000 UTC m=+2.026050462 container remove 7a59d84311e9595ffd0494e0e7ef3c30c0139b162950c74f7b13355b56097cfc (image=quay.io/ceph/ceph:v19, name=gifted_haslett, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 26 09:41:15 compute-0 sudo[85332]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:15 compute-0 systemd[1]: libpod-conmon-7a59d84311e9595ffd0494e0e7ef3c30c0139b162950c74f7b13355b56097cfc.scope: Deactivated successfully.
Jan 26 09:41:15 compute-0 sudo[85413]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haxzfxzazjpcrwgrynvbejxnaebvrpor ; /usr/bin/python3'
Jan 26 09:41:15 compute-0 sudo[85413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:41:15 compute-0 python3[85415]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:15 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:15 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:41:15 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 26 09:41:15 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:15 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev 50b80179-64eb-4754-b9f2-a51788facc85 (Updating crash deployment (+1 -> 3))
Jan 26 09:41:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 26 09:41:15 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 50b80179-64eb-4754-b9f2-a51788facc85 (Updating crash deployment (+1 -> 3)) in 2 seconds
Jan 26 09:41:15 compute-0 podman[85416]: 2026-01-26 09:41:15.547991288 +0000 UTC m=+0.048946456 container create e0bc85725aad94a49bf006262c72280c36cd2a222080d9c06cddb20d981b17c0 (image=quay.io/ceph/ceph:v19, name=vigilant_brattain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 26 09:41:15 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:41:15 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:41:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:41:15 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:41:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:15 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:41:15 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:41:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:15 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
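"config generate-minimal-conf", dispatched repeatedly here, emits a stripped-down ceph.conf (essentially fsid plus mon_host) that cephadm distributes to each host it deploys daemons on. Run by hand it looks like this (the output path is illustrative, not from the log):

    ceph config generate-minimal-conf | sudo tee /etc/ceph/ceph.conf.min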
Jan 26 09:41:15 compute-0 sudo[85429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:41:15 compute-0 podman[85416]: 2026-01-26 09:41:15.532985342 +0000 UTC m=+0.033940530 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:15 compute-0 sudo[85429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:15 compute-0 sudo[85429]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v84: 4 pgs: 1 unknown, 2 active+clean, 1 creating+peering; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:15 compute-0 sudo[85454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:41:15 compute-0 sudo[85454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
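cephadm stages a digest-named copy of itself under /var/lib/ceph/<fsid>/ and drives ceph-volume inside the Ceph container to turn the pre-created logical volume into an OSD. Stripped of the staging path, the logged call amounts to the following (config JSON is piped on stdin; OSDSPEC affinity is set via --env as logged):

    cephadm --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
        --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 \
        --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd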
Jan 26 09:41:15 compute-0 systemd[1]: Started libpod-conmon-e0bc85725aad94a49bf006262c72280c36cd2a222080d9c06cddb20d981b17c0.scope.
Jan 26 09:41:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2fd1588631c22faff7c02f50ac5ab87e4d52cf0f6804bd4b423e09e19565a20/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2fd1588631c22faff7c02f50ac5ab87e4d52cf0f6804bd4b423e09e19565a20/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:15 compute-0 podman[85416]: 2026-01-26 09:41:15.810029963 +0000 UTC m=+0.310985161 container init e0bc85725aad94a49bf006262c72280c36cd2a222080d9c06cddb20d981b17c0 (image=quay.io/ceph/ceph:v19, name=vigilant_brattain, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:15 compute-0 podman[85416]: 2026-01-26 09:41:15.816515505 +0000 UTC m=+0.317470683 container start e0bc85725aad94a49bf006262c72280c36cd2a222080d9c06cddb20d981b17c0 (image=quay.io/ceph/ceph:v19, name=vigilant_brattain, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:15 compute-0 podman[85416]: 2026-01-26 09:41:15.820365687 +0000 UTC m=+0.321320855 container attach e0bc85725aad94a49bf006262c72280c36cd2a222080d9c06cddb20d981b17c0 (image=quay.io/ceph/ceph:v19, name=vigilant_brattain, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 09:41:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 26 09:41:15 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 5 completed events
Jan 26 09:41:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:41:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Jan 26 09:41:16 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Jan 26 09:41:16 compute-0 sshd-session[85504]: Invalid user  from 129.212.186.155 port 36760
Jan 26 09:41:16 compute-0 podman[85547]: 2026-01-26 09:41:16.058607792 +0000 UTC m=+0.063756429 container create ebebcb71f3f858b8e128443e7e74e15a5dbce4359da70c3fe8633673d0612f67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_euler, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:41:16 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1199163324' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 09:41:16 compute-0 ceph-mon[74456]: osdmap e16: 2 total, 2 up, 2 in
Jan 26 09:41:16 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:16 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:16 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:16 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:16 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:41:16 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:41:16 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:16 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:41:16 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:16 compute-0 ceph-mon[74456]: pgmap v84: 4 pgs: 1 unknown, 2 active+clean, 1 creating+peering; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:16 compute-0 podman[85547]: 2026-01-26 09:41:16.013270603 +0000 UTC m=+0.018419260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:16 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 26 09:41:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1746553743' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 09:41:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:41:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:41:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:41:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:41:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:41:16 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:41:16 compute-0 systemd[1]: Started libpod-conmon-ebebcb71f3f858b8e128443e7e74e15a5dbce4359da70c3fe8633673d0612f67.scope.
Jan 26 09:41:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 26 09:41:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "cdaf6859-268c-4a38-b792-ad916b17c334"} v 0)
Jan 26 09:41:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "cdaf6859-268c-4a38-b792-ad916b17c334"}]: dispatch
Jan 26 09:41:17 compute-0 podman[85547]: 2026-01-26 09:41:17.501003896 +0000 UTC m=+1.506152633 container init ebebcb71f3f858b8e128443e7e74e15a5dbce4359da70c3fe8633673d0612f67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_euler, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 26 09:41:17 compute-0 podman[85547]: 2026-01-26 09:41:17.511584356 +0000 UTC m=+1.516733023 container start ebebcb71f3f858b8e128443e7e74e15a5dbce4359da70c3fe8633673d0612f67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_euler, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:17 compute-0 pensive_euler[85567]: 167 167
Jan 26 09:41:17 compute-0 systemd[1]: libpod-ebebcb71f3f858b8e128443e7e74e15a5dbce4359da70c3fe8633673d0612f67.scope: Deactivated successfully.
Jan 26 09:41:17 compute-0 conmon[85567]: conmon ebebcb71f3f858b8e128 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebebcb71f3f858b8e128443e7e74e15a5dbce4359da70c3fe8633673d0612f67.scope/container/memory.events
Jan 26 09:41:17 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oynaeu started
Jan 26 09:41:17 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mgr.compute-2.oynaeu 192.168.122.102:0/2794833866; not ready for session (expect reconnect)
Jan 26 09:41:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v86: 4 pgs: 1 unknown, 2 active+clean, 1 creating+peering; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1746553743' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 09:41:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Jan 26 09:41:18 compute-0 vigilant_brattain[85481]: pool 'images' created
Jan 26 09:41:18 compute-0 podman[85547]: 2026-01-26 09:41:18.092582253 +0000 UTC m=+2.097730910 container attach ebebcb71f3f858b8e128443e7e74e15a5dbce4359da70c3fe8633673d0612f67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 09:41:18 compute-0 ceph-mon[74456]: osdmap e17: 2 total, 2 up, 2 in
Jan 26 09:41:18 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:18 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1746553743' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 09:41:18 compute-0 podman[85547]: 2026-01-26 09:41:18.093467165 +0000 UTC m=+2.098615822 container died ebebcb71f3f858b8e128443e7e74e15a5dbce4359da70c3fe8633673d0612f67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Jan 26 09:41:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 26 09:41:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e18 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 09:41:18 compute-0 systemd[1]: libpod-e0bc85725aad94a49bf006262c72280c36cd2a222080d9c06cddb20d981b17c0.scope: Deactivated successfully.
Jan 26 09:41:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Jan 26 09:41:18 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 18 pg[5.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "cdaf6859-268c-4a38-b792-ad916b17c334"}]': finished
Jan 26 09:41:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Jan 26 09:41:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Jan 26 09:41:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:18 compute-0 podman[85416]: 2026-01-26 09:41:18.128514073 +0000 UTC m=+2.629469261 container died e0bc85725aad94a49bf006262c72280c36cd2a222080d9c06cddb20d981b17c0 (image=quay.io/ceph/ceph:v19, name=vigilant_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 26 09:41:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:18 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:18 compute-0 sshd-session[85570]: Invalid user admin from 157.245.76.178 port 57108
Jan 26 09:41:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2fd1588631c22faff7c02f50ac5ab87e4d52cf0f6804bd4b423e09e19565a20-merged.mount: Deactivated successfully.
Jan 26 09:41:18 compute-0 sshd-session[85570]: Connection closed by invalid user admin 157.245.76.178 port 57108 [preauth]
Jan 26 09:41:18 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mgr.compute-2.oynaeu 192.168.122.102:0/2794833866; not ready for session (expect reconnect)
Jan 26 09:41:18 compute-0 podman[85416]: 2026-01-26 09:41:18.750774641 +0000 UTC m=+3.251729809 container remove e0bc85725aad94a49bf006262c72280c36cd2a222080d9c06cddb20d981b17c0 (image=quay.io/ceph/ceph:v19, name=vigilant_brattain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:18 compute-0 systemd[1]: libpod-conmon-e0bc85725aad94a49bf006262c72280c36cd2a222080d9c06cddb20d981b17c0.scope: Deactivated successfully.
Jan 26 09:41:18 compute-0 sudo[85413]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.zllcia(active, since 2m), standbys: compute-2.oynaeu
Jan 26 09:41:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.oynaeu", "id": "compute-2.oynaeu"} v 0)
Jan 26 09:41:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oynaeu", "id": "compute-2.oynaeu"}]: dispatch
Jan 26 09:41:18 compute-0 sudo[85624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afadogdwmqqewueuiwygrhbndprerxje ; /usr/bin/python3'
Jan 26 09:41:18 compute-0 sudo[85624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:19 compute-0 python3[85626]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecefbd2bbc3dc070b6fec413748abf8dfc8434afdff3291c532e74b088e28ea1-merged.mount: Deactivated successfully.
Jan 26 09:41:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 26 09:41:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v89: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:19 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 09:41:20 compute-0 podman[85547]: 2026-01-26 09:41:20.181237699 +0000 UTC m=+4.186386366 container remove ebebcb71f3f858b8e128443e7e74e15a5dbce4359da70c3fe8633673d0612f67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_euler, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:20 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4005713193' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "cdaf6859-268c-4a38-b792-ad916b17c334"}]: dispatch
Jan 26 09:41:20 compute-0 ceph-mon[74456]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "cdaf6859-268c-4a38-b792-ad916b17c334"}]: dispatch
Jan 26 09:41:20 compute-0 ceph-mon[74456]: Standby manager daemon compute-2.oynaeu started
Jan 26 09:41:20 compute-0 ceph-mon[74456]: pgmap v86: 4 pgs: 1 unknown, 2 active+clean, 1 creating+peering; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:20 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1746553743' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 09:41:20 compute-0 ceph-mon[74456]: osdmap e18: 2 total, 2 up, 2 in
Jan 26 09:41:20 compute-0 ceph-mon[74456]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "cdaf6859-268c-4a38-b792-ad916b17c334"}]': finished
Jan 26 09:41:20 compute-0 ceph-mon[74456]: osdmap e19: 3 total, 2 up, 3 in
Jan 26 09:41:20 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:20 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3839283137' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 26 09:41:20 compute-0 ceph-mon[74456]: mgrmap e10: compute-0.zllcia(active, since 2m), standbys: compute-2.oynaeu
Jan 26 09:41:20 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oynaeu", "id": "compute-2.oynaeu"}]: dispatch
Jan 26 09:41:20 compute-0 podman[85627]: 2026-01-26 09:41:20.26854183 +0000 UTC m=+1.235074198 container create 2a04e10032272a1975108f3de97132968d08693fee8124b64fe353c240cf35f9 (image=quay.io/ceph/ceph:v19, name=amazing_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 09:41:20 compute-0 podman[85627]: 2026-01-26 09:41:20.18844125 +0000 UTC m=+1.154973638 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:20 compute-0 systemd[1]: Started libpod-conmon-2a04e10032272a1975108f3de97132968d08693fee8124b64fe353c240cf35f9.scope.
Jan 26 09:41:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd27a0d35f203a91f148b2a73a46a08f57a2f3cf3c89d91eb76f98a129c3001/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dd27a0d35f203a91f148b2a73a46a08f57a2f3cf3c89d91eb76f98a129c3001/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:20 compute-0 podman[85648]: 2026-01-26 09:41:20.733240038 +0000 UTC m=+0.427383782 container create e393bb05ed3191988b6a5b5b1f414dcda98d96261850d7b7666d496b449ed067 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:20 compute-0 podman[85627]: 2026-01-26 09:41:20.743328395 +0000 UTC m=+1.709860783 container init 2a04e10032272a1975108f3de97132968d08693fee8124b64fe353c240cf35f9 (image=quay.io/ceph/ceph:v19, name=amazing_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 09:41:20 compute-0 podman[85627]: 2026-01-26 09:41:20.749778115 +0000 UTC m=+1.716310483 container start 2a04e10032272a1975108f3de97132968d08693fee8124b64fe353c240cf35f9 (image=quay.io/ceph/ceph:v19, name=amazing_vaughan, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:20 compute-0 podman[85648]: 2026-01-26 09:41:20.667534329 +0000 UTC m=+0.361678093 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:20 compute-0 systemd[1]: Started libpod-conmon-e393bb05ed3191988b6a5b5b1f414dcda98d96261850d7b7666d496b449ed067.scope.
Jan 26 09:41:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:20 compute-0 podman[85627]: 2026-01-26 09:41:20.801800303 +0000 UTC m=+1.768332671 container attach 2a04e10032272a1975108f3de97132968d08693fee8124b64fe353c240cf35f9 (image=quay.io/ceph/ceph:v19, name=amazing_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555f922e4983a66ed7ff71c7117538e96cd2655d869008fd4594e38d7c3361c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555f922e4983a66ed7ff71c7117538e96cd2655d869008fd4594e38d7c3361c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555f922e4983a66ed7ff71c7117538e96cd2655d869008fd4594e38d7c3361c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555f922e4983a66ed7ff71c7117538e96cd2655d869008fd4594e38d7c3361c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555f922e4983a66ed7ff71c7117538e96cd2655d869008fd4594e38d7c3361c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:20 compute-0 podman[85648]: 2026-01-26 09:41:20.820903537 +0000 UTC m=+0.515047291 container init e393bb05ed3191988b6a5b5b1f414dcda98d96261850d7b7666d496b449ed067 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:20 compute-0 systemd[1]: libpod-conmon-ebebcb71f3f858b8e128443e7e74e15a5dbce4359da70c3fe8633673d0612f67.scope: Deactivated successfully.
Jan 26 09:41:20 compute-0 podman[85648]: 2026-01-26 09:41:20.831498238 +0000 UTC m=+0.525641982 container start e393bb05ed3191988b6a5b5b1f414dcda98d96261850d7b7666d496b449ed067 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_brattain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 09:41:20 compute-0 podman[85648]: 2026-01-26 09:41:20.835184435 +0000 UTC m=+0.529328169 container attach e393bb05ed3191988b6a5b5b1f414dcda98d96261850d7b7666d496b449ed067 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_brattain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 26 09:41:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 26 09:41:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/929823694' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 09:41:21 compute-0 exciting_brattain[85670]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:41:21 compute-0 exciting_brattain[85670]: --> All data devices are unavailable
Jan 26 09:41:21 compute-0 systemd[1]: libpod-e393bb05ed3191988b6a5b5b1f414dcda98d96261850d7b7666d496b449ed067.scope: Deactivated successfully.
Jan 26 09:41:21 compute-0 podman[85648]: 2026-01-26 09:41:21.162809276 +0000 UTC m=+0.856953050 container died e393bb05ed3191988b6a5b5b1f414dcda98d96261850d7b7666d496b449ed067 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 09:41:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-555f922e4983a66ed7ff71c7117538e96cd2655d869008fd4594e38d7c3361c5-merged.mount: Deactivated successfully.
Jan 26 09:41:21 compute-0 podman[85648]: 2026-01-26 09:41:21.27251752 +0000 UTC m=+0.966661264 container remove e393bb05ed3191988b6a5b5b1f414dcda98d96261850d7b7666d496b449ed067 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 09:41:21 compute-0 systemd[1]: libpod-conmon-e393bb05ed3191988b6a5b5b1f414dcda98d96261850d7b7666d496b449ed067.scope: Deactivated successfully.
Jan 26 09:41:21 compute-0 sudo[85454]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:21 compute-0 sudo[85719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:41:21 compute-0 sudo[85719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:21 compute-0 sudo[85719]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:21 compute-0 sudo[85744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:41:21 compute-0 sudo[85744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Jan 26 09:41:21 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Jan 26 09:41:21 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 20 pg[5.0( empty local-lis/les=18/20 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v91: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:21 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:21 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:21 compute-0 podman[85809]: 2026-01-26 09:41:21.792788968 +0000 UTC m=+0.026893932 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:21 compute-0 ceph-mon[74456]: pgmap v89: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:21 compute-0 ceph-mon[74456]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 09:41:21 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/929823694' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 09:41:21 compute-0 podman[85809]: 2026-01-26 09:41:21.966302061 +0000 UTC m=+0.200407015 container create 36dbabc0f64da004d365a7211c14032ccbb79ca1c4b5e18e18b833b3b8655c8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_bardeen, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:41:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:41:22 compute-0 systemd[1]: Started libpod-conmon-36dbabc0f64da004d365a7211c14032ccbb79ca1c4b5e18e18b833b3b8655c8a.scope.
Jan 26 09:41:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 26 09:41:22 compute-0 podman[85809]: 2026-01-26 09:41:22.604087421 +0000 UTC m=+0.838192385 container init 36dbabc0f64da004d365a7211c14032ccbb79ca1c4b5e18e18b833b3b8655c8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 26 09:41:22 compute-0 podman[85809]: 2026-01-26 09:41:22.611389294 +0000 UTC m=+0.845494228 container start 36dbabc0f64da004d365a7211c14032ccbb79ca1c4b5e18e18b833b3b8655c8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 09:41:22 compute-0 recursing_bardeen[85825]: 167 167
Jan 26 09:41:22 compute-0 systemd[1]: libpod-36dbabc0f64da004d365a7211c14032ccbb79ca1c4b5e18e18b833b3b8655c8a.scope: Deactivated successfully.
Jan 26 09:41:22 compute-0 podman[85809]: 2026-01-26 09:41:22.616259683 +0000 UTC m=+0.850364647 container attach 36dbabc0f64da004d365a7211c14032ccbb79ca1c4b5e18e18b833b3b8655c8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_bardeen, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 09:41:22 compute-0 podman[85809]: 2026-01-26 09:41:22.617366862 +0000 UTC m=+0.851471816 container died 36dbabc0f64da004d365a7211c14032ccbb79ca1c4b5e18e18b833b3b8655c8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 09:41:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/929823694' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 09:41:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Jan 26 09:41:22 compute-0 amazing_vaughan[85662]: pool 'cephfs.cephfs.meta' created
Jan 26 09:41:22 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Jan 26 09:41:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:22 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:22 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:22 compute-0 systemd[1]: libpod-2a04e10032272a1975108f3de97132968d08693fee8124b64fe353c240cf35f9.scope: Deactivated successfully.
Jan 26 09:41:22 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec56b8a5c35cc34d2a12c4bae2ec362d7a3b111179c059762219ea7900e34491-merged.mount: Deactivated successfully.
Jan 26 09:41:22 compute-0 podman[85627]: 2026-01-26 09:41:22.728156833 +0000 UTC m=+3.694689201 container died 2a04e10032272a1975108f3de97132968d08693fee8124b64fe353c240cf35f9 (image=quay.io/ceph/ceph:v19, name=amazing_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 09:41:22 compute-0 podman[85809]: 2026-01-26 09:41:22.838187556 +0000 UTC m=+1.072292520 container remove 36dbabc0f64da004d365a7211c14032ccbb79ca1c4b5e18e18b833b3b8655c8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_bardeen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dd27a0d35f203a91f148b2a73a46a08f57a2f3cf3c89d91eb76f98a129c3001-merged.mount: Deactivated successfully.
Jan 26 09:41:23 compute-0 ceph-mon[74456]: osdmap e20: 3 total, 2 up, 3 in
Jan 26 09:41:23 compute-0 ceph-mon[74456]: pgmap v91: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:23 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:23 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/929823694' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 09:41:23 compute-0 ceph-mon[74456]: osdmap e21: 3 total, 2 up, 3 in
Jan 26 09:41:23 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:23 compute-0 podman[85627]: 2026-01-26 09:41:23.464745589 +0000 UTC m=+4.431277957 container remove 2a04e10032272a1975108f3de97132968d08693fee8124b64fe353c240cf35f9 (image=quay.io/ceph/ceph:v19, name=amazing_vaughan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Jan 26 09:41:23 compute-0 systemd[1]: libpod-conmon-2a04e10032272a1975108f3de97132968d08693fee8124b64fe353c240cf35f9.scope: Deactivated successfully.
Jan 26 09:41:23 compute-0 sudo[85624]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:23 compute-0 systemd[1]: libpod-conmon-36dbabc0f64da004d365a7211c14032ccbb79ca1c4b5e18e18b833b3b8655c8a.scope: Deactivated successfully.
Jan 26 09:41:23 compute-0 podman[85865]: 2026-01-26 09:41:23.512923603 +0000 UTC m=+0.552862323 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 26 09:41:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v93: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:23 compute-0 sudo[85901]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxxciywbfywvlnoglekpqgsuosmgmvbc ; /usr/bin/python3'
Jan 26 09:41:23 compute-0 sudo[85901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:23 compute-0 python3[85903]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:23 compute-0 sshd-session[85504]: Connection closed by invalid user  129.212.186.155 port 36760 [preauth]
Jan 26 09:41:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Jan 26 09:41:23 compute-0 podman[85865]: 2026-01-26 09:41:23.9235647 +0000 UTC m=+0.963503460 container create b337bf9eb17cfda06796125de2afcb4cac0ee56419495ecf030b4396e4d0e7e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:41:23 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Jan 26 09:41:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:23 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:23 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:24 compute-0 systemd[1]: Started libpod-conmon-b337bf9eb17cfda06796125de2afcb4cac0ee56419495ecf030b4396e4d0e7e5.scope.
Jan 26 09:41:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0121e83a5656e3e81d7d99cc48254a76ce99d80caa1a505004fc16806edc9bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0121e83a5656e3e81d7d99cc48254a76ce99d80caa1a505004fc16806edc9bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0121e83a5656e3e81d7d99cc48254a76ce99d80caa1a505004fc16806edc9bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0121e83a5656e3e81d7d99cc48254a76ce99d80caa1a505004fc16806edc9bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:24 compute-0 podman[85904]: 2026-01-26 09:41:23.99798791 +0000 UTC m=+0.158571848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:24 compute-0 podman[85904]: 2026-01-26 09:41:24.102001893 +0000 UTC m=+0.262585811 container create d7576cc07907e787f146e6c0245ec1340d82b28b83d67c326c8962ee8b1b6730 (image=quay.io/ceph/ceph:v19, name=sweet_kalam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:24 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:24 compute-0 systemd[1]: Started libpod-conmon-d7576cc07907e787f146e6c0245ec1340d82b28b83d67c326c8962ee8b1b6730.scope.
Jan 26 09:41:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954d7e0fdf0dab3f2097d4fa09fc32b44abbeb7fed1541069ad129f3f03a33f7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954d7e0fdf0dab3f2097d4fa09fc32b44abbeb7fed1541069ad129f3f03a33f7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:24 compute-0 podman[85865]: 2026-01-26 09:41:24.4691787 +0000 UTC m=+1.509117450 container init b337bf9eb17cfda06796125de2afcb4cac0ee56419495ecf030b4396e4d0e7e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhaskara, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 09:41:24 compute-0 podman[85865]: 2026-01-26 09:41:24.478456146 +0000 UTC m=+1.518394896 container start b337bf9eb17cfda06796125de2afcb4cac0ee56419495ecf030b4396e4d0e7e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhaskara, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 09:41:24 compute-0 ceph-mon[74456]: pgmap v93: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:24 compute-0 ceph-mon[74456]: osdmap e22: 3 total, 2 up, 3 in
Jan 26 09:41:24 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]: {
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:     "0": [
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:         {
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:             "devices": [
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "/dev/loop3"
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:             ],
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:             "lv_name": "ceph_lv0",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:             "lv_size": "21470642176",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:             "name": "ceph_lv0",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:             "tags": {
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "ceph.cluster_name": "ceph",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "ceph.crush_device_class": "",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "ceph.encrypted": "0",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "ceph.osd_id": "0",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "ceph.type": "block",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "ceph.vdo": "0",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:                 "ceph.with_tpm": "0"
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:             },
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:             "type": "block",
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:             "vg_name": "ceph_vg0"
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:         }
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]:     ]
Jan 26 09:41:24 compute-0 dazzling_bhaskara[85921]: }
Jan 26 09:41:24 compute-0 podman[85865]: 2026-01-26 09:41:24.781505826 +0000 UTC m=+1.821444546 container attach b337bf9eb17cfda06796125de2afcb4cac0ee56419495ecf030b4396e4d0e7e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 26 09:41:24 compute-0 systemd[1]: libpod-b337bf9eb17cfda06796125de2afcb4cac0ee56419495ecf030b4396e4d0e7e5.scope: Deactivated successfully.
Jan 26 09:41:24 compute-0 podman[85865]: 2026-01-26 09:41:24.798902966 +0000 UTC m=+1.838841686 container died b337bf9eb17cfda06796125de2afcb4cac0ee56419495ecf030b4396e4d0e7e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 09:41:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 26 09:41:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0121e83a5656e3e81d7d99cc48254a76ce99d80caa1a505004fc16806edc9bd-merged.mount: Deactivated successfully.
Jan 26 09:41:25 compute-0 podman[85865]: 2026-01-26 09:41:25.048216464 +0000 UTC m=+2.088155204 container remove b337bf9eb17cfda06796125de2afcb4cac0ee56419495ecf030b4396e4d0e7e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 09:41:25 compute-0 systemd[1]: libpod-conmon-b337bf9eb17cfda06796125de2afcb4cac0ee56419495ecf030b4396e4d0e7e5.scope: Deactivated successfully.
Jan 26 09:41:25 compute-0 sudo[85744]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:25 compute-0 podman[85904]: 2026-01-26 09:41:25.127707548 +0000 UTC m=+1.288291496 container init d7576cc07907e787f146e6c0245ec1340d82b28b83d67c326c8962ee8b1b6730 (image=quay.io/ceph/ceph:v19, name=sweet_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 09:41:25 compute-0 podman[85904]: 2026-01-26 09:41:25.133909873 +0000 UTC m=+1.294493791 container start d7576cc07907e787f146e6c0245ec1340d82b28b83d67c326c8962ee8b1b6730 (image=quay.io/ceph/ceph:v19, name=sweet_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:25 compute-0 sudo[85947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:41:25 compute-0 sudo[85947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:25 compute-0 sudo[85947]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:25 compute-0 podman[85904]: 2026-01-26 09:41:25.18935629 +0000 UTC m=+1.349940258 container attach d7576cc07907e787f146e6c0245ec1340d82b28b83d67c326c8962ee8b1b6730 (image=quay.io/ceph/ceph:v19, name=sweet_kalam, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 09:41:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Jan 26 09:41:25 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Jan 26 09:41:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:25 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:25 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:25 compute-0 sudo[85973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:41:25 compute-0 sudo[85973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 26 09:41:25 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1985194690' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 09:41:25 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.xammti started
Jan 26 09:41:25 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mgr.compute-1.xammti 192.168.122.101:0/1801712309; not ready for session (expect reconnect)
Jan 26 09:41:25 compute-0 podman[86059]: 2026-01-26 09:41:25.619677409 +0000 UTC m=+0.044933250 container create 2dc19250f5d7ff4cc6eaee2ea61dcc215ca68937cfd4caff7c331c70b746a1ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 09:41:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v96: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:25 compute-0 systemd[1]: Started libpod-conmon-2dc19250f5d7ff4cc6eaee2ea61dcc215ca68937cfd4caff7c331c70b746a1ba.scope.
Jan 26 09:41:25 compute-0 podman[86059]: 2026-01-26 09:41:25.596432183 +0000 UTC m=+0.021688094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:25 compute-0 podman[86059]: 2026-01-26 09:41:25.73610158 +0000 UTC m=+0.161357441 container init 2dc19250f5d7ff4cc6eaee2ea61dcc215ca68937cfd4caff7c331c70b746a1ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:25 compute-0 podman[86059]: 2026-01-26 09:41:25.742965952 +0000 UTC m=+0.168221793 container start 2dc19250f5d7ff4cc6eaee2ea61dcc215ca68937cfd4caff7c331c70b746a1ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 26 09:41:25 compute-0 eager_mestorf[86075]: 167 167
Jan 26 09:41:25 compute-0 systemd[1]: libpod-2dc19250f5d7ff4cc6eaee2ea61dcc215ca68937cfd4caff7c331c70b746a1ba.scope: Deactivated successfully.
Jan 26 09:41:25 compute-0 podman[86059]: 2026-01-26 09:41:25.755982826 +0000 UTC m=+0.181238667 container attach 2dc19250f5d7ff4cc6eaee2ea61dcc215ca68937cfd4caff7c331c70b746a1ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mestorf, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:25 compute-0 podman[86059]: 2026-01-26 09:41:25.756716915 +0000 UTC m=+0.181972766 container died 2dc19250f5d7ff4cc6eaee2ea61dcc215ca68937cfd4caff7c331c70b746a1ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 26 09:41:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d27cdad9dd9625eb0164001d6bb27f57dbea2bb34b3bc9a166e62eefc4e336a-merged.mount: Deactivated successfully.
Jan 26 09:41:25 compute-0 podman[86059]: 2026-01-26 09:41:25.796538079 +0000 UTC m=+0.221793920 container remove 2dc19250f5d7ff4cc6eaee2ea61dcc215ca68937cfd4caff7c331c70b746a1ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_mestorf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 26 09:41:25 compute-0 systemd[1]: libpod-conmon-2dc19250f5d7ff4cc6eaee2ea61dcc215ca68937cfd4caff7c331c70b746a1ba.scope: Deactivated successfully.
Jan 26 09:41:26 compute-0 podman[86099]: 2026-01-26 09:41:25.949982491 +0000 UTC m=+0.024653214 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:26 compute-0 podman[86099]: 2026-01-26 09:41:26.320898397 +0000 UTC m=+0.395569120 container create fe601f4d6f708b840eeb5a8028ddfb96f5450da75632165c8abed35af6add784 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:41:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 26 09:41:26 compute-0 ceph-mon[74456]: osdmap e23: 3 total, 2 up, 3 in
Jan 26 09:41:26 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:26 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1985194690' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 09:41:26 compute-0 ceph-mon[74456]: Standby manager daemon compute-1.xammti started
Jan 26 09:41:26 compute-0 ceph-mon[74456]: pgmap v96: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:26 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from mgr.compute-1.xammti 192.168.122.101:0/1801712309; not ready for session (expect reconnect)
Jan 26 09:41:26 compute-0 systemd[1]: Started libpod-conmon-fe601f4d6f708b840eeb5a8028ddfb96f5450da75632165c8abed35af6add784.scope.
Jan 26 09:41:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0178cb3f8fb6cc579b8f15c31c71b4746402285ca63019a197272e86fbdc87f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0178cb3f8fb6cc579b8f15c31c71b4746402285ca63019a197272e86fbdc87f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0178cb3f8fb6cc579b8f15c31c71b4746402285ca63019a197272e86fbdc87f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0178cb3f8fb6cc579b8f15c31c71b4746402285ca63019a197272e86fbdc87f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1985194690' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 09:41:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Jan 26 09:41:26 compute-0 sweet_kalam[85926]: pool 'cephfs.cephfs.data' created
Jan 26 09:41:26 compute-0 systemd[1]: libpod-d7576cc07907e787f146e6c0245ec1340d82b28b83d67c326c8962ee8b1b6730.scope: Deactivated successfully.
Jan 26 09:41:26 compute-0 podman[86099]: 2026-01-26 09:41:26.694386131 +0000 UTC m=+0.769056854 container init fe601f4d6f708b840eeb5a8028ddfb96f5450da75632165c8abed35af6add784 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 09:41:26 compute-0 podman[86099]: 2026-01-26 09:41:26.706184154 +0000 UTC m=+0.780854847 container start fe601f4d6f708b840eeb5a8028ddfb96f5450da75632165c8abed35af6add784 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 09:41:26 compute-0 podman[85904]: 2026-01-26 09:41:26.706348558 +0000 UTC m=+2.866932476 container died d7576cc07907e787f146e6c0245ec1340d82b28b83d67c326c8962ee8b1b6730 (image=quay.io/ceph/ceph:v19, name=sweet_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 09:41:26 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.zllcia(active, since 2m), standbys: compute-2.oynaeu, compute-1.xammti
Jan 26 09:41:26 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Jan 26 09:41:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:26 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.xammti", "id": "compute-1.xammti"} v 0)
Jan 26 09:41:26 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:26 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-1.xammti", "id": "compute-1.xammti"}]: dispatch
Jan 26 09:41:26 compute-0 podman[86099]: 2026-01-26 09:41:26.729703735 +0000 UTC m=+0.804374428 container attach fe601f4d6f708b840eeb5a8028ddfb96f5450da75632165c8abed35af6add784 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 26 09:41:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-954d7e0fdf0dab3f2097d4fa09fc32b44abbeb7fed1541069ad129f3f03a33f7-merged.mount: Deactivated successfully.
Jan 26 09:41:26 compute-0 podman[85904]: 2026-01-26 09:41:26.802353258 +0000 UTC m=+2.962937176 container remove d7576cc07907e787f146e6c0245ec1340d82b28b83d67c326c8962ee8b1b6730 (image=quay.io/ceph/ceph:v19, name=sweet_kalam, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 09:41:26 compute-0 systemd[1]: libpod-conmon-d7576cc07907e787f146e6c0245ec1340d82b28b83d67c326c8962ee8b1b6730.scope: Deactivated successfully.
Jan 26 09:41:26 compute-0 sudo[85901]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 26 09:41:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 26 09:41:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:26 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:26 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 26 09:41:26 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 26 09:41:26 compute-0 sudo[86172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjimgsayelhtcpdiarwfnhdldrxcnrfy ; /usr/bin/python3'
Jan 26 09:41:26 compute-0 sudo[86172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:27 compute-0 python3[86174]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 09:41:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:41:27 compute-0 podman[86203]: 2026-01-26 09:41:27.221137771 +0000 UTC m=+0.056580378 container create 2b50d98874d3e607188779e58b6728fc32fc9b7fabc324591f40fd510b4ebbfc (image=quay.io/ceph/ceph:v19, name=strange_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:41:27 compute-0 systemd[1]: Started libpod-conmon-2b50d98874d3e607188779e58b6728fc32fc9b7fabc324591f40fd510b4ebbfc.scope.
Jan 26 09:41:27 compute-0 podman[86203]: 2026-01-26 09:41:27.186986758 +0000 UTC m=+0.022429385 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472870a5957dbe3ce206d7fedf83f0a5cd7e3b20c84a3aaff58893acd0f4d729/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472870a5957dbe3ce206d7fedf83f0a5cd7e3b20c84a3aaff58893acd0f4d729/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:27 compute-0 lvm[86245]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:41:27 compute-0 lvm[86245]: VG ceph_vg0 finished
Jan 26 09:41:27 compute-0 elegant_hellman[86115]: {}
Jan 26 09:41:27 compute-0 systemd[1]: libpod-fe601f4d6f708b840eeb5a8028ddfb96f5450da75632165c8abed35af6add784.scope: Deactivated successfully.
Jan 26 09:41:27 compute-0 systemd[1]: libpod-fe601f4d6f708b840eeb5a8028ddfb96f5450da75632165c8abed35af6add784.scope: Consumed 1.176s CPU time.
Jan 26 09:41:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v98: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:27 compute-0 podman[86203]: 2026-01-26 09:41:27.781798521 +0000 UTC m=+0.617241218 container init 2b50d98874d3e607188779e58b6728fc32fc9b7fabc324591f40fd510b4ebbfc (image=quay.io/ceph/ceph:v19, name=strange_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 26 09:41:27 compute-0 podman[86203]: 2026-01-26 09:41:27.794772954 +0000 UTC m=+0.630215581 container start 2b50d98874d3e607188779e58b6728fc32fc9b7fabc324591f40fd510b4ebbfc (image=quay.io/ceph/ceph:v19, name=strange_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Jan 26 09:41:28 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1985194690' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 09:41:28 compute-0 ceph-mon[74456]: mgrmap e11: compute-0.zllcia(active, since 2m), standbys: compute-2.oynaeu, compute-1.xammti
Jan 26 09:41:28 compute-0 ceph-mon[74456]: osdmap e24: 3 total, 2 up, 3 in
Jan 26 09:41:28 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:28 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-1.xammti", "id": "compute-1.xammti"}]: dispatch
Jan 26 09:41:28 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 26 09:41:28 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:28 compute-0 ceph-mon[74456]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 09:41:28 compute-0 podman[86203]: 2026-01-26 09:41:28.110950611 +0000 UTC m=+0.946393218 container attach 2b50d98874d3e607188779e58b6728fc32fc9b7fabc324591f40fd510b4ebbfc (image=quay.io/ceph/ceph:v19, name=strange_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 09:41:28 compute-0 podman[86099]: 2026-01-26 09:41:28.16948547 +0000 UTC m=+2.244156163 container died fe601f4d6f708b840eeb5a8028ddfb96f5450da75632165c8abed35af6add784 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 26 09:41:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 26 09:41:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2556293450' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 26 09:41:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Jan 26 09:41:28 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Jan 26 09:41:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:28 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:28 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0178cb3f8fb6cc579b8f15c31c71b4746402285ca63019a197272e86fbdc87f8-merged.mount: Deactivated successfully.
Jan 26 09:41:28 compute-0 podman[86099]: 2026-01-26 09:41:28.819508833 +0000 UTC m=+2.894179526 container remove fe601f4d6f708b840eeb5a8028ddfb96f5450da75632165c8abed35af6add784 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hellman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:28 compute-0 systemd[1]: libpod-conmon-fe601f4d6f708b840eeb5a8028ddfb96f5450da75632165c8abed35af6add784.scope: Deactivated successfully.
Jan 26 09:41:28 compute-0 sudo[85973]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:41:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:41:29 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v100: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 26 09:41:29 compute-0 ceph-mon[74456]: Deploying daemon osd.2 on compute-2
Jan 26 09:41:29 compute-0 ceph-mon[74456]: pgmap v98: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:29 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2556293450' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 26 09:41:29 compute-0 ceph-mon[74456]: osdmap e25: 3 total, 2 up, 3 in
Jan 26 09:41:29 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:29 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:30 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2556293450' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 26 09:41:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Jan 26 09:41:30 compute-0 strange_archimedes[86238]: enabled application 'rbd' on pool 'vms'
Jan 26 09:41:30 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Jan 26 09:41:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:30 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:30 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:30 compute-0 systemd[1]: libpod-2b50d98874d3e607188779e58b6728fc32fc9b7fabc324591f40fd510b4ebbfc.scope: Deactivated successfully.
Jan 26 09:41:30 compute-0 podman[86203]: 2026-01-26 09:41:30.063925537 +0000 UTC m=+2.899368194 container died 2b50d98874d3e607188779e58b6728fc32fc9b7fabc324591f40fd510b4ebbfc (image=quay.io/ceph/ceph:v19, name=strange_archimedes, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 09:41:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-472870a5957dbe3ce206d7fedf83f0a5cd7e3b20c84a3aaff58893acd0f4d729-merged.mount: Deactivated successfully.
Jan 26 09:41:30 compute-0 podman[86203]: 2026-01-26 09:41:30.737033252 +0000 UTC m=+3.572475889 container remove 2b50d98874d3e607188779e58b6728fc32fc9b7fabc324591f40fd510b4ebbfc (image=quay.io/ceph/ceph:v19, name=strange_archimedes, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:30 compute-0 systemd[1]: libpod-conmon-2b50d98874d3e607188779e58b6728fc32fc9b7fabc324591f40fd510b4ebbfc.scope: Deactivated successfully.
Jan 26 09:41:30 compute-0 sudo[86172]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:30 compute-0 sudo[86319]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqdiyvlgeqxaonknbpxpbvekyelnlnpw ; /usr/bin/python3'
Jan 26 09:41:30 compute-0 sudo[86319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:30 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:30 compute-0 ceph-mon[74456]: pgmap v100: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:30 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2556293450' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 26 09:41:30 compute-0 ceph-mon[74456]: osdmap e26: 3 total, 2 up, 3 in
Jan 26 09:41:30 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:31 compute-0 python3[86321]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:31 compute-0 podman[86322]: 2026-01-26 09:41:31.090025163 +0000 UTC m=+0.057069231 container create d68b4d8ea991c44896d762b6bf91fe578c2ad07c555456d2e003c26679c2622c (image=quay.io/ceph/ceph:v19, name=compassionate_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:41:31 compute-0 systemd[1]: Started libpod-conmon-d68b4d8ea991c44896d762b6bf91fe578c2ad07c555456d2e003c26679c2622c.scope.
Jan 26 09:41:31 compute-0 podman[86322]: 2026-01-26 09:41:31.061341804 +0000 UTC m=+0.028385952 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d50f1c341283df6f5f4022cbd77aa7025b9b52d9346040e7ce1e4a6b40d442a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d50f1c341283df6f5f4022cbd77aa7025b9b52d9346040e7ce1e4a6b40d442a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:41:31 compute-0 podman[86322]: 2026-01-26 09:41:31.3264321 +0000 UTC m=+0.293476178 container init d68b4d8ea991c44896d762b6bf91fe578c2ad07c555456d2e003c26679c2622c (image=quay.io/ceph/ceph:v19, name=compassionate_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Jan 26 09:41:31 compute-0 podman[86322]: 2026-01-26 09:41:31.33774875 +0000 UTC m=+0.304792838 container start d68b4d8ea991c44896d762b6bf91fe578c2ad07c555456d2e003c26679c2622c (image=quay.io/ceph/ceph:v19, name=compassionate_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:31 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:41:31 compute-0 podman[86322]: 2026-01-26 09:41:31.377715097 +0000 UTC m=+0.344759175 container attach d68b4d8ea991c44896d762b6bf91fe578c2ad07c555456d2e003c26679c2622c (image=quay.io/ceph/ceph:v19, name=compassionate_heisenberg, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:31 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v102: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 26 09:41:31 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1132920361' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 26 09:41:32 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 09:41:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:41:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 26 09:41:32 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:32 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:32 compute-0 ceph-mon[74456]: pgmap v102: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:32 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1132920361' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 26 09:41:32 compute-0 ceph-mon[74456]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 09:41:32 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1132920361' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 26 09:41:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Jan 26 09:41:32 compute-0 compassionate_heisenberg[86337]: enabled application 'rbd' on pool 'volumes'
Jan 26 09:41:32 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Jan 26 09:41:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:32 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:32 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:32 compute-0 systemd[1]: libpod-d68b4d8ea991c44896d762b6bf91fe578c2ad07c555456d2e003c26679c2622c.scope: Deactivated successfully.
Jan 26 09:41:32 compute-0 podman[86322]: 2026-01-26 09:41:32.761141729 +0000 UTC m=+1.728185817 container died d68b4d8ea991c44896d762b6bf91fe578c2ad07c555456d2e003c26679c2622c (image=quay.io/ceph/ceph:v19, name=compassionate_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 09:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d50f1c341283df6f5f4022cbd77aa7025b9b52d9346040e7ce1e4a6b40d442a-merged.mount: Deactivated successfully.
Jan 26 09:41:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:41:33 compute-0 podman[86322]: 2026-01-26 09:41:33.382717209 +0000 UTC m=+2.349761277 container remove d68b4d8ea991c44896d762b6bf91fe578c2ad07c555456d2e003c26679c2622c (image=quay.io/ceph/ceph:v19, name=compassionate_heisenberg, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 26 09:41:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 26 09:41:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 26 09:41:33 compute-0 sudo[86319]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:41:33 compute-0 systemd[1]: libpod-conmon-d68b4d8ea991c44896d762b6bf91fe578c2ad07c555456d2e003c26679c2622c.scope: Deactivated successfully.
Jan 26 09:41:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:33 compute-0 sudo[86375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:41:33 compute-0 sudo[86375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:33 compute-0 sudo[86420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxyzywecykuivugsfckuatxdhrgovyqw ; /usr/bin/python3'
Jan 26 09:41:33 compute-0 sudo[86420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:33 compute-0 sudo[86375]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v104: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:33 compute-0 python3[86424]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:33 compute-0 sudo[86431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:41:33 compute-0 sudo[86431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:33 compute-0 sudo[86431]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 26 09:41:33 compute-0 podman[86425]: 2026-01-26 09:41:33.753584184 +0000 UTC m=+0.085943155 container create c83a552e5865391ca139e2f2b8dafbedafbcfe013cf20b4ae268da0f18a6bff9 (image=quay.io/ceph/ceph:v19, name=dazzling_chandrasekhar, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 26 09:41:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Jan 26 09:41:33 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Jan 26 09:41:33 compute-0 podman[86425]: 2026-01-26 09:41:33.689119648 +0000 UTC m=+0.021478649 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:33 compute-0 systemd[1]: Started libpod-conmon-c83a552e5865391ca139e2f2b8dafbedafbcfe013cf20b4ae268da0f18a6bff9.scope.
Jan 26 09:41:33 compute-0 sudo[86463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:41:33 compute-0 sudo[86463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabb405c34707404eb96c6154c29870018b0aff94238f9558fa462790c43debf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabb405c34707404eb96c6154c29870018b0aff94238f9558fa462790c43debf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:34 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Jan 26 09:41:34 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 26 09:41:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e28 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Jan 26 09:41:34 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:34 compute-0 podman[86425]: 2026-01-26 09:41:34.017135169 +0000 UTC m=+0.349494170 container init c83a552e5865391ca139e2f2b8dafbedafbcfe013cf20b4ae268da0f18a6bff9 (image=quay.io/ceph/ceph:v19, name=dazzling_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:34 compute-0 podman[86425]: 2026-01-26 09:41:34.024048742 +0000 UTC m=+0.356407713 container start c83a552e5865391ca139e2f2b8dafbedafbcfe013cf20b4ae268da0f18a6bff9 (image=quay.io/ceph/ceph:v19, name=dazzling_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 09:41:34 compute-0 podman[86425]: 2026-01-26 09:41:34.078701169 +0000 UTC m=+0.411060170 container attach c83a552e5865391ca139e2f2b8dafbedafbcfe013cf20b4ae268da0f18a6bff9 (image=quay.io/ceph/ceph:v19, name=dazzling_chandrasekhar, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 09:41:34 compute-0 sudo[86463]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 26 09:41:34 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2154943776' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 26 09:41:34 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1132920361' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 26 09:41:34 compute-0 ceph-mon[74456]: osdmap e27: 3 total, 2 up, 3 in
Jan 26 09:41:34 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:34 compute-0 ceph-mon[74456]: from='osd.2 [v2:192.168.122.102:6800/4046341804,v1:192.168.122.102:6801/4046341804]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 26 09:41:34 compute-0 ceph-mon[74456]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 26 09:41:34 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:34 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 26 09:41:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Jan 26 09:41:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2154943776' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 26 09:41:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Jan 26 09:41:35 compute-0 dazzling_chandrasekhar[86489]: enabled application 'rbd' on pool 'backups'
Jan 26 09:41:35 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4046341804; not ready for session (expect reconnect)
Jan 26 09:41:35 compute-0 systemd[1]: libpod-c83a552e5865391ca139e2f2b8dafbedafbcfe013cf20b4ae268da0f18a6bff9.scope: Deactivated successfully.
Jan 26 09:41:35 compute-0 podman[86425]: 2026-01-26 09:41:35.361890889 +0000 UTC m=+1.694249880 container died c83a552e5865391ca139e2f2b8dafbedafbcfe013cf20b4ae268da0f18a6bff9 (image=quay.io/ceph/ceph:v19, name=dazzling_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 09:41:35 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Jan 26 09:41:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:41:35 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v107: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:35 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 29 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=29 pruub=10.031669617s) [] r=-1 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active pruub 83.365829468s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:41:35 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 29 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=29 pruub=10.031669617s) [] r=-1 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.365829468s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:41:35 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=18/20 n=0 ec=18/18 lis/c=18/18 les/c/f=20/20/0 sis=29 pruub=9.866729736s) [] r=-1 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 active pruub 83.201187134s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:41:35 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=18/20 n=0 ec=18/18 lis/c=18/18 les/c/f=20/20/0 sis=29 pruub=9.866729736s) [] r=-1 lpr=29 pi=[18,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.201187134s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:41:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:41:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-cabb405c34707404eb96c6154c29870018b0aff94238f9558fa462790c43debf-merged.mount: Deactivated successfully.
Jan 26 09:41:35 compute-0 ceph-mon[74456]: pgmap v104: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:35 compute-0 ceph-mon[74456]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 26 09:41:35 compute-0 ceph-mon[74456]: osdmap e28: 3 total, 2 up, 3 in
Jan 26 09:41:35 compute-0 ceph-mon[74456]: from='osd.2 [v2:192.168.122.102:6800/4046341804,v1:192.168.122.102:6801/4046341804]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 26 09:41:35 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:35 compute-0 ceph-mon[74456]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 26 09:41:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2154943776' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 26 09:41:35 compute-0 ceph-mon[74456]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Jan 26 09:41:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2154943776' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 26 09:41:35 compute-0 ceph-mon[74456]: osdmap e29: 3 total, 2 up, 3 in
Jan 26 09:41:35 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:41:35 compute-0 podman[86425]: 2026-01-26 09:41:35.980717057 +0000 UTC m=+2.313076028 container remove c83a552e5865391ca139e2f2b8dafbedafbcfe013cf20b4ae268da0f18a6bff9 (image=quay.io/ceph/ceph:v19, name=dazzling_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:41:36 compute-0 sudo[86420]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:36 compute-0 systemd[1]: libpod-conmon-c83a552e5865391ca139e2f2b8dafbedafbcfe013cf20b4ae268da0f18a6bff9.scope: Deactivated successfully.
Jan 26 09:41:36 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:36 compute-0 sudo[86582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brsbntsbyxkfhjmdvzjnofdygjyzixwv ; /usr/bin/python3'
Jan 26 09:41:36 compute-0 sudo[86582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:36 compute-0 python3[86584]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:36 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4046341804; not ready for session (expect reconnect)
Jan 26 09:41:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:36 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:36 compute-0 podman[86585]: 2026-01-26 09:41:36.378367471 +0000 UTC m=+0.063837401 container create 811d36e7538e5f53bbe2ba997061748d8705ec5eb2934fa091df6f0639278007 (image=quay.io/ceph/ceph:v19, name=suspicious_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:41:36 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:36 compute-0 systemd[1]: Started libpod-conmon-811d36e7538e5f53bbe2ba997061748d8705ec5eb2934fa091df6f0639278007.scope.
Jan 26 09:41:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:36 compute-0 podman[86585]: 2026-01-26 09:41:36.36058218 +0000 UTC m=+0.046052150 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c76d2e1f1871f63e5a10a39b868b9c0f0f9fce50134af0772a2a45883523e5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c76d2e1f1871f63e5a10a39b868b9c0f0f9fce50134af0772a2a45883523e5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:36 compute-0 podman[86585]: 2026-01-26 09:41:36.495547391 +0000 UTC m=+0.181017351 container init 811d36e7538e5f53bbe2ba997061748d8705ec5eb2934fa091df6f0639278007 (image=quay.io/ceph/ceph:v19, name=suspicious_williams, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:36 compute-0 podman[86585]: 2026-01-26 09:41:36.501657163 +0000 UTC m=+0.187127113 container start 811d36e7538e5f53bbe2ba997061748d8705ec5eb2934fa091df6f0639278007 (image=quay.io/ceph/ceph:v19, name=suspicious_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 09:41:36 compute-0 podman[86585]: 2026-01-26 09:41:36.608343887 +0000 UTC m=+0.293813867 container attach 811d36e7538e5f53bbe2ba997061748d8705ec5eb2934fa091df6f0639278007 (image=quay.io/ceph/ceph:v19, name=suspicious_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 09:41:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 26 09:41:36 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2817412888' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 26 09:41:37 compute-0 ceph-mon[74456]: purged_snaps scrub starts
Jan 26 09:41:37 compute-0 ceph-mon[74456]: purged_snaps scrub ok
Jan 26 09:41:37 compute-0 ceph-mon[74456]: pgmap v107: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:37 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:37 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:37 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:37 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:37 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2817412888' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 26 09:41:37 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 09:41:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:41:37 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4046341804; not ready for session (expect reconnect)
Jan 26 09:41:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:37 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:37 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 26 09:41:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2817412888' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 26 09:41:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Jan 26 09:41:37 compute-0 suspicious_williams[86600]: enabled application 'rbd' on pool 'images'
Jan 26 09:41:37 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Jan 26 09:41:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:37 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:37 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:37 compute-0 systemd[1]: libpod-811d36e7538e5f53bbe2ba997061748d8705ec5eb2934fa091df6f0639278007.scope: Deactivated successfully.
Jan 26 09:41:37 compute-0 podman[86585]: 2026-01-26 09:41:37.478812204 +0000 UTC m=+1.164282164 container died 811d36e7538e5f53bbe2ba997061748d8705ec5eb2934fa091df6f0639278007 (image=quay.io/ceph/ceph:v19, name=suspicious_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 09:41:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v109: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-23c76d2e1f1871f63e5a10a39b868b9c0f0f9fce50134af0772a2a45883523e5-merged.mount: Deactivated successfully.
Jan 26 09:41:37 compute-0 podman[86585]: 2026-01-26 09:41:37.873041377 +0000 UTC m=+1.558511347 container remove 811d36e7538e5f53bbe2ba997061748d8705ec5eb2934fa091df6f0639278007 (image=quay.io/ceph/ceph:v19, name=suspicious_williams, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:37 compute-0 systemd[1]: libpod-conmon-811d36e7538e5f53bbe2ba997061748d8705ec5eb2934fa091df6f0639278007.scope: Deactivated successfully.
Jan 26 09:41:37 compute-0 sudo[86582]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:38 compute-0 sudo[86661]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umrzgymfgbfcketbtyrfeypebgzbewcg ; /usr/bin/python3'
Jan 26 09:41:38 compute-0 sudo[86661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:38 compute-0 ceph-mon[74456]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 09:41:38 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2817412888' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 26 09:41:38 compute-0 ceph-mon[74456]: osdmap e30: 3 total, 2 up, 3 in
Jan 26 09:41:38 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:38 compute-0 ceph-mon[74456]: pgmap v109: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:38 compute-0 python3[86663]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:38 compute-0 podman[86664]: 2026-01-26 09:41:38.330124844 +0000 UTC m=+0.101045706 container create 30974f51308bd77b52df8c91fc5aaebfe364f5b4df73fca825cb69a5af00c091 (image=quay.io/ceph/ceph:v19, name=elated_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:41:38 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4046341804; not ready for session (expect reconnect)
Jan 26 09:41:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:38 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:38 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:38 compute-0 podman[86664]: 2026-01-26 09:41:38.255744675 +0000 UTC m=+0.026665537 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:38 compute-0 systemd[1]: Started libpod-conmon-30974f51308bd77b52df8c91fc5aaebfe364f5b4df73fca825cb69a5af00c091.scope.
Jan 26 09:41:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58566e7d628bdb8907dc811753157471c149c17a39408fc5146a041eaa4647a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58566e7d628bdb8907dc811753157471c149c17a39408fc5146a041eaa4647a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:38 compute-0 podman[86664]: 2026-01-26 09:41:38.791047802 +0000 UTC m=+0.561968705 container init 30974f51308bd77b52df8c91fc5aaebfe364f5b4df73fca825cb69a5af00c091 (image=quay.io/ceph/ceph:v19, name=elated_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Jan 26 09:41:38 compute-0 podman[86664]: 2026-01-26 09:41:38.800114522 +0000 UTC m=+0.571035344 container start 30974f51308bd77b52df8c91fc5aaebfe364f5b4df73fca825cb69a5af00c091 (image=quay.io/ceph/ceph:v19, name=elated_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 09:41:38 compute-0 podman[86664]: 2026-01-26 09:41:38.885111722 +0000 UTC m=+0.656032574 container attach 30974f51308bd77b52df8c91fc5aaebfe364f5b4df73fca825cb69a5af00c091 (image=quay.io/ceph/ceph:v19, name=elated_einstein, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:41:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 26 09:41:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3209549506' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 26 09:41:39 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:41:39 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4046341804; not ready for session (expect reconnect)
Jan 26 09:41:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:39 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:41:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 26 09:41:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 26 09:41:39 compute-0 ceph-mgr[74755]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Jan 26 09:41:39 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Jan 26 09:41:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 26 09:41:39 compute-0 ceph-mgr[74755]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 26 09:41:39 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 26 09:41:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:41:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:41:39 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:41:39 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:41:39 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:41:39 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:41:39 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:41:39 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:41:39 compute-0 sudo[86705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 26 09:41:39 compute-0 sudo[86705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:39 compute-0 sudo[86705]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v110: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:39 compute-0 sudo[86730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph
Jan 26 09:41:39 compute-0 sudo[86730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:39 compute-0 sudo[86730]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:39 compute-0 sudo[86755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:41:39 compute-0 sudo[86755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:39 compute-0 sudo[86755]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:39 compute-0 sudo[86780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:41:39 compute-0 sudo[86780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:39 compute-0 sudo[86780]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 26 09:41:39 compute-0 sudo[86805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:41:39 compute-0 sudo[86805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:39 compute-0 sudo[86805]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:39 compute-0 sudo[86853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:41:39 compute-0 sudo[86853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:39 compute-0 sudo[86853]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:40 compute-0 sudo[86878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:41:40 compute-0 sudo[86878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:40 compute-0 sudo[86878]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:40 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:41:40 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:41:40 compute-0 sudo[86903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 26 09:41:40 compute-0 sudo[86903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:40 compute-0 sudo[86903]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:40 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:41:40 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:41:40 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:41:40 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:41:40 compute-0 sudo[86928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:41:40 compute-0 sudo[86928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:40 compute-0 sudo[86928]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3209549506' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 26 09:41:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Jan 26 09:41:40 compute-0 elated_einstein[86680]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 26 09:41:40 compute-0 sudo[86953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:41:40 compute-0 systemd[1]: libpod-30974f51308bd77b52df8c91fc5aaebfe364f5b4df73fca825cb69a5af00c091.scope: Deactivated successfully.
Jan 26 09:41:40 compute-0 podman[86664]: 2026-01-26 09:41:40.260566823 +0000 UTC m=+2.031487645 container died 30974f51308bd77b52df8c91fc5aaebfe364f5b4df73fca825cb69a5af00c091 (image=quay.io/ceph/ceph:v19, name=elated_einstein, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 09:41:40 compute-0 sudo[86953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:40 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Jan 26 09:41:40 compute-0 sudo[86953]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:40 compute-0 sudo[86980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:41:40 compute-0 sudo[86980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:40 compute-0 sudo[86980]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:40 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4046341804; not ready for session (expect reconnect)
Jan 26 09:41:40 compute-0 sudo[87015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:41:40 compute-0 sudo[87015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:40 compute-0 sudo[87015]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:40 compute-0 sudo[87040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:41:40 compute-0 sudo[87040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:40 compute-0 sudo[87040]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:40 compute-0 sudo[87088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:41:40 compute-0 sudo[87088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:40 compute-0 sudo[87088]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:40 compute-0 sudo[87113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:41:40 compute-0 sudo[87113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:40 compute-0 sudo[87113]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:40 compute-0 sudo[87138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:41:40 compute-0 sudo[87138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:40 compute-0 sudo[87138]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:40 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:41:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:41:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:41:40 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 26 09:41:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3209549506' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 26 09:41:41 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:41 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:41 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:41 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 26 09:41:41 compute-0 ceph-mon[74456]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 26 09:41:41 compute-0 ceph-mon[74456]: Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
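The failed set is the memory autotuner at work: the per-OSD share computed for this small VM comes to 134209126 bytes (the "127.9M" above), but osd_memory_target enforces a floor of 939524096 bytes (896 MiB), so the value is rejected and cephadm instead drops the per-OSD override (the "config rm ... osd_memory_target" two lines up). A sketch of that clamp, assuming the common formula of total memory times an autotune ratio divided by the local OSD count:

    OSD_MEMORY_TARGET_MIN = 939524096  # 896 MiB floor cited in the error above

    def autotuned_target(total_mem: int, ratio: float, num_osds: int) -> int | None:
        target = int(total_mem * ratio / num_osds)
        if target < OSD_MEMORY_TARGET_MIN:
            # Too small to be valid: set no override at all, mirroring
            # the "config rm ... osd_memory_target" in the log.
            return None
        return target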
Jan 26 09:41:41 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:41 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:41:41 compute-0 ceph-mon[74456]: Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:41:41 compute-0 ceph-mon[74456]: Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:41:41 compute-0 ceph-mon[74456]: Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:41:41 compute-0 ceph-mon[74456]: pgmap v110: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:41 compute-0 ceph-mgr[74755]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4046341804; not ready for session (expect reconnect)
Jan 26 09:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f58566e7d628bdb8907dc811753157471c149c17a39408fc5146a041eaa4647a-merged.mount: Deactivated successfully.
Jan 26 09:41:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v112: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:41 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:41 compute-0 ceph-mgr[74755]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 09:41:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 26 09:41:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:41 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/4046341804,v1:192.168.122.102:6801/4046341804] boot
Jan 26 09:41:41 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 26 09:41:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:41:41 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:41:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 32 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=32 pruub=3.876931667s) [2] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.365829468s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:41:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 32 pg[5.0( empty local-lis/les=18/20 n=0 ec=18/18 lis/c=18/18 les/c/f=20/20/0 sis=32 pruub=3.712302685s) [2] r=-1 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.201187134s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:41:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 32 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=32 pruub=3.876894712s) [2] r=-1 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.365829468s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:41:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 32 pg[5.0( empty local-lis/les=18/20 n=0 ec=18/18 lis/c=18/18 les/c/f=20/20/0 sis=32 pruub=3.712205172s) [2] r=-1 lpr=32 pi=[18,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.201187134s@ mbc={}] state<Start>: transitioning to Stray
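These PeeringState lines are the flip side of osd.2's boot: in osdmap e32 the up/acting sets of pgs 3.0 and 5.0 change from [] to [2], osd.0 finds itself outside the new acting set (role -1 -> -1), restarts the peering interval, and parks its empty copy in Stray until the primary decides whether it is needed. A conceptual, heavily simplified sketch of that role decision (the real logic lives in Ceph's PeeringState machine):

    def new_role(osd_id: int, acting: list[int]) -> int:
        # Ceph convention: an OSD's role is its index in the acting set,
        # or -1 if it is not a member at all
        return acting.index(osd_id) if osd_id in acting else -1

    def next_state(osd_id: int, acting: list[int]) -> str:
        role = new_role(osd_id, acting)
        if role == 0:
            return "Primary"
        return "ReplicaActive" if role > 0 else "Stray"

    # osd.0 against acting [2]: new_role() -> -1, next_state() -> "Stray"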
Jan 26 09:41:41 compute-0 podman[86664]: 2026-01-26 09:41:41.845610562 +0000 UTC m=+3.616531414 container remove 30974f51308bd77b52df8c91fc5aaebfe364f5b4df73fca825cb69a5af00c091 (image=quay.io/ceph/ceph:v19, name=elated_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 09:41:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:41:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:41 compute-0 sudo[86661]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:41 compute-0 systemd[1]: libpod-conmon-30974f51308bd77b52df8c91fc5aaebfe364f5b4df73fca825cb69a5af00c091.scope: Deactivated successfully.
Jan 26 09:41:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:41:42 compute-0 sudo[87187]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izzjjcrqhitgtpyokwuvyprllgdseeap ; /usr/bin/python3'
Jan 26 09:41:42 compute-0 sudo[87187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:42 compute-0 python3[87189]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:42 compute-0 podman[87190]: 2026-01-26 09:41:42.211828354 +0000 UTC m=+0.031093893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:42 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 09:41:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:41:42 compute-0 podman[87190]: 2026-01-26 09:41:42.633097223 +0000 UTC m=+0.452362722 container create 5477b8a514f6f148d678b7dec19796f6d675761732a7d27806fb01abffc10e26 (image=quay.io/ceph/ceph:v19, name=angry_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:42 compute-0 ceph-mon[74456]: OSD bench result of 4662.749970 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
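Each OSD benchmarks itself at startup to size mClock's IOPS capacity, and a result outside the plausibility window is deliberately discarded: 4662 IOPS measured against a VM-backed logical volume falls outside the 50-500 IOPS window, so the previous 315 IOPS value is kept, and the message recommends measuring with fio and pinning osd_mclock_max_capacity_iops_[hdd|ssd] by hand instead. A sketch of the acceptance check as the message describes it:

    def accept_bench_iops(measured: float, current: float,
                          low: float = 50.0, high: float = 500.0) -> float:
        # Keep the measured capacity only when it is plausible; otherwise
        # retain the current setting, as the log line above reports.
        return measured if low <= measured <= high else current

    # accept_bench_iops(4662.75, 315.0) -> 315.0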
Jan 26 09:41:42 compute-0 ceph-mon[74456]: Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:41:42 compute-0 ceph-mon[74456]: Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:41:42 compute-0 ceph-mon[74456]: Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:41:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3209549506' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 26 09:41:42 compute-0 ceph-mon[74456]: osdmap e31: 3 total, 2 up, 3 in
Jan 26 09:41:42 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:42 compute-0 ceph-mon[74456]: pgmap v112: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 26 09:41:42 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:42 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:42 compute-0 ceph-mon[74456]: osd.2 [v2:192.168.122.102:6800/4046341804,v1:192.168.122.102:6801/4046341804] boot
Jan 26 09:41:42 compute-0 ceph-mon[74456]: osdmap e32: 3 total, 3 up, 3 in
Jan 26 09:41:42 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:41:42 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:42 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:42 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:42 compute-0 systemd[1]: Started libpod-conmon-5477b8a514f6f148d678b7dec19796f6d675761732a7d27806fb01abffc10e26.scope.
Jan 26 09:41:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:41:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebcd192fbcf7b735f614bd4c45098a738e89e790adfc292e140f7af9506562b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebcd192fbcf7b735f614bd4c45098a738e89e790adfc292e140f7af9506562b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:42 compute-0 podman[87190]: 2026-01-26 09:41:42.818744046 +0000 UTC m=+0.638009635 container init 5477b8a514f6f148d678b7dec19796f6d675761732a7d27806fb01abffc10e26 (image=quay.io/ceph/ceph:v19, name=angry_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 26 09:41:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:41:42 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:41:42 compute-0 podman[87190]: 2026-01-26 09:41:42.831063832 +0000 UTC m=+0.650329361 container start 5477b8a514f6f148d678b7dec19796f6d675761732a7d27806fb01abffc10e26 (image=quay.io/ceph/ceph:v19, name=angry_roentgen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:41:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:41:42 compute-0 podman[87190]: 2026-01-26 09:41:42.841297724 +0000 UTC m=+0.660563263 container attach 5477b8a514f6f148d678b7dec19796f6d675761732a7d27806fb01abffc10e26 (image=quay.io/ceph/ceph:v19, name=angry_roentgen, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:42 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 26 09:41:42 compute-0 sudo[87209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:41:42 compute-0 sudo[87209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:42 compute-0 sudo[87209]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:42 compute-0 sudo[87234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:41:42 compute-0 sudo[87234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 26 09:41:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3224045909' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 26 09:41:43 compute-0 podman[87321]: 2026-01-26 09:41:43.373049396 +0000 UTC m=+0.025179497 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v114: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:43 compute-0 podman[87321]: 2026-01-26 09:41:43.742505314 +0000 UTC m=+0.394635415 container create 16df54914981793edd112302b5f3daa3f0b19eed4d9d353ca6229658e52c76f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:43 compute-0 systemd[1]: Started libpod-conmon-16df54914981793edd112302b5f3daa3f0b19eed4d9d353ca6229658e52c76f9.scope.
Jan 26 09:41:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:44 compute-0 podman[87321]: 2026-01-26 09:41:44.069400696 +0000 UTC m=+0.721530817 container init 16df54914981793edd112302b5f3daa3f0b19eed4d9d353ca6229658e52c76f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 26 09:41:44 compute-0 podman[87321]: 2026-01-26 09:41:44.079311218 +0000 UTC m=+0.731441319 container start 16df54914981793edd112302b5f3daa3f0b19eed4d9d353ca6229658e52c76f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:44 compute-0 sharp_ardinghelli[87337]: 167 167
Jan 26 09:41:44 compute-0 systemd[1]: libpod-16df54914981793edd112302b5f3daa3f0b19eed4d9d353ca6229658e52c76f9.scope: Deactivated successfully.
Jan 26 09:41:44 compute-0 conmon[87337]: conmon 16df54914981793edd11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16df54914981793edd112302b5f3daa3f0b19eed4d9d353ca6229658e52c76f9.scope/container/memory.events
Jan 26 09:41:44 compute-0 podman[87321]: 2026-01-26 09:41:44.086508789 +0000 UTC m=+0.738638920 container attach 16df54914981793edd112302b5f3daa3f0b19eed4d9d353ca6229658e52c76f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ardinghelli, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:44 compute-0 podman[87321]: 2026-01-26 09:41:44.088035909 +0000 UTC m=+0.740166050 container died 16df54914981793edd112302b5f3daa3f0b19eed4d9d353ca6229658e52c76f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 26 09:41:44 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 26 09:41:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-59ff6beb0054cd4258127092b91fc022b11dd6c1ec2e1868fd7e9b93d55cfc9f-merged.mount: Deactivated successfully.
Jan 26 09:41:44 compute-0 ceph-mon[74456]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 09:41:44 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:44 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:44 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:44 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:41:44 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:41:44 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3224045909' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 26 09:41:44 compute-0 podman[87321]: 2026-01-26 09:41:44.755827822 +0000 UTC m=+1.407957923 container remove 16df54914981793edd112302b5f3daa3f0b19eed4d9d353ca6229658e52c76f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_ardinghelli, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:41:44 compute-0 systemd[1]: libpod-conmon-16df54914981793edd112302b5f3daa3f0b19eed4d9d353ca6229658e52c76f9.scope: Deactivated successfully.
Jan 26 09:41:44 compute-0 podman[87361]: 2026-01-26 09:41:44.90050438 +0000 UTC m=+0.024910600 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:45 compute-0 podman[87361]: 2026-01-26 09:41:45.083485324 +0000 UTC m=+0.207891524 container create 4f52279e3939fd8e889e4335e706c88db9f6e642b5babc9d18a03f90b71254ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_margulis, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:45 compute-0 systemd[1]: Started libpod-conmon-4f52279e3939fd8e889e4335e706c88db9f6e642b5babc9d18a03f90b71254ab.scope.
Jan 26 09:41:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be394ecf2a62c9f1054fd70d93c3aeb8caa2c48103d86999d85144857456c70b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be394ecf2a62c9f1054fd70d93c3aeb8caa2c48103d86999d85144857456c70b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be394ecf2a62c9f1054fd70d93c3aeb8caa2c48103d86999d85144857456c70b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be394ecf2a62c9f1054fd70d93c3aeb8caa2c48103d86999d85144857456c70b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be394ecf2a62c9f1054fd70d93c3aeb8caa2c48103d86999d85144857456c70b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 26 09:41:45 compute-0 podman[87361]: 2026-01-26 09:41:45.416945969 +0000 UTC m=+0.541352189 container init 4f52279e3939fd8e889e4335e706c88db9f6e642b5babc9d18a03f90b71254ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_margulis, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 09:41:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3224045909' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 26 09:41:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 26 09:41:45 compute-0 angry_roentgen[87205]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 26 09:41:45 compute-0 podman[87361]: 2026-01-26 09:41:45.430771205 +0000 UTC m=+0.555177405 container start 4f52279e3939fd8e889e4335e706c88db9f6e642b5babc9d18a03f90b71254ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_margulis, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:41:45 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 26 09:41:45 compute-0 podman[87361]: 2026-01-26 09:41:45.436776873 +0000 UTC m=+0.561183103 container attach 4f52279e3939fd8e889e4335e706c88db9f6e642b5babc9d18a03f90b71254ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 09:41:45 compute-0 systemd[1]: libpod-5477b8a514f6f148d678b7dec19796f6d675761732a7d27806fb01abffc10e26.scope: Deactivated successfully.
Jan 26 09:41:45 compute-0 podman[87190]: 2026-01-26 09:41:45.453799594 +0000 UTC m=+3.273065093 container died 5477b8a514f6f148d678b7dec19796f6d675761732a7d27806fb01abffc10e26 (image=quay.io/ceph/ceph:v19, name=angry_roentgen, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 09:41:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-eebcd192fbcf7b735f614bd4c45098a738e89e790adfc292e140f7af9506562b-merged.mount: Deactivated successfully.
Jan 26 09:41:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v117: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:45 compute-0 friendly_margulis[87377]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:41:45 compute-0 friendly_margulis[87377]: --> All data devices are unavailable
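This is the outcome of the "lvm batch --no-auto /dev/ceph_vg0/ceph_lv0" run launched at 09:41:42: ceph-volume counts the argument as one LVM data device, then filters it out as unavailable because the LV already carries an OSD, so the batch creates nothing. That filter is what makes cephadm's OSD reconciliation idempotent on re-runs. A sketch of such an availability filter (data shape hypothetical, not ceph-volume's internals):

    def available_data_devices(devices: list[dict]) -> list[dict]:
        """Drop LVs that already belong to an OSD (ceph.* lvm tags present)."""
        return [d for d in devices
                if not any(tag.startswith("ceph.") for tag in d.get("tags", {}))]

    # one LVM device in, zero out -> "--> All data devices are unavailable"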
Jan 26 09:41:45 compute-0 systemd[1]: libpod-4f52279e3939fd8e889e4335e706c88db9f6e642b5babc9d18a03f90b71254ab.scope: Deactivated successfully.
Jan 26 09:41:45 compute-0 conmon[87377]: conmon 4f52279e3939fd8e889e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f52279e3939fd8e889e4335e706c88db9f6e642b5babc9d18a03f90b71254ab.scope/container/memory.events
Jan 26 09:41:45 compute-0 ceph-mon[74456]: pgmap v114: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:45 compute-0 ceph-mon[74456]: osdmap e33: 3 total, 3 up, 3 in
Jan 26 09:41:45 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3224045909' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 26 09:41:45 compute-0 ceph-mon[74456]: osdmap e34: 3 total, 3 up, 3 in
Jan 26 09:41:46 compute-0 podman[87190]: 2026-01-26 09:41:46.001143679 +0000 UTC m=+3.820409178 container remove 5477b8a514f6f148d678b7dec19796f6d675761732a7d27806fb01abffc10e26 (image=quay.io/ceph/ceph:v19, name=angry_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:41:46 compute-0 systemd[1]: libpod-conmon-5477b8a514f6f148d678b7dec19796f6d675761732a7d27806fb01abffc10e26.scope: Deactivated successfully.
Jan 26 09:41:46 compute-0 sudo[87187]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:46 compute-0 podman[87361]: 2026-01-26 09:41:46.068546143 +0000 UTC m=+1.192952353 container died 4f52279e3939fd8e889e4335e706c88db9f6e642b5babc9d18a03f90b71254ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 26 09:41:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-be394ecf2a62c9f1054fd70d93c3aeb8caa2c48103d86999d85144857456c70b-merged.mount: Deactivated successfully.
Jan 26 09:41:46 compute-0 podman[87408]: 2026-01-26 09:41:46.412426475 +0000 UTC m=+0.664256931 container remove 4f52279e3939fd8e889e4335e706c88db9f6e642b5babc9d18a03f90b71254ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_margulis, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 09:41:46 compute-0 systemd[1]: libpod-conmon-4f52279e3939fd8e889e4335e706c88db9f6e642b5babc9d18a03f90b71254ab.scope: Deactivated successfully.
Jan 26 09:41:46 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 26 09:41:46 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 26 09:41:46 compute-0 sudo[87234]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:46 compute-0 sudo[87424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:41:46 compute-0 sudo[87424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:46 compute-0 sudo[87424]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:46 compute-0 sudo[87449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:41:46 compute-0 sudo[87449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
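After the no-op batch, cephadm re-inventories the host with "ceph-volume lvm list --format json" to map existing logical volumes back to OSD ids. A sketch of consuming that report, assuming the documented shape of one list of LV records per OSD id, each carrying ceph.* lvm tags such as ceph.cluster_fsid:

    import json, subprocess

    def list_osd_lvs(fsid: str) -> dict[str, str]:
        out = subprocess.run(
            ["ceph-volume", "lvm", "list", "--format", "json"],
            capture_output=True, text=True, check=True).stdout
        report = json.loads(out)  # {"2": [{"lv_path": ..., "tags": {...}}], ...}
        return {osd_id: lvs[0].get("lv_path", "?")
                for osd_id, lvs in report.items()
                if any(lv.get("tags", {}).get("ceph.cluster_fsid") == fsid
                       for lv in lvs)}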
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:41:46
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [balancer INFO root] Some PGs (0.285714) are inactive; try again later
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
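The autoscaler arithmetic is visible in these numbers: with 3 OSDs and the default mon_target_pg_per_osd of 100, a pool's raw PG target is its usage fraction × 300 (for '.mgr', 7.185749983720779e-06 × 300 = 0.0021557..., exactly the value logged), and the result is quantized to a power of two subject to a per-pool floor, 1 for '.mgr' but 32 for the data pools here, which is why pg_num jumps from 1 to 32 in the commands that follow. A sketch of that computation under those assumptions:

    import math

    def pg_target(usage_fraction: float, num_osds: int,
                  target_pg_per_osd: int = 100, pool_min: int = 1) -> int:
        raw = usage_fraction * target_pg_per_osd * num_osds
        quantized = 1 if raw < 1 else 1 << math.ceil(math.log2(raw))
        return max(quantized, pool_min)

    # pg_target(7.185749983720779e-06, 3)  -> 1   ('.mgr', stays at 1)
    # pg_target(0.0, 3, pool_min=32)       -> 32  (empty pools, grown to 32)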
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:41:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 26 09:41:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:41:46 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:41:47 compute-0 python3[87580]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:41:47 compute-0 podman[87588]: 2026-01-26 09:41:46.953692069 +0000 UTC m=+0.037118014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:47 compute-0 podman[87588]: 2026-01-26 09:41:47.146482711 +0000 UTC m=+0.229908666 container create 423169b8a67ac545fe2eb45830d9646d6f6e903bce3b96f21c1b101b20dabbc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lichterman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:47 compute-0 ceph-mon[74456]: pgmap v117: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:47 compute-0 ceph-mon[74456]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 26 09:41:47 compute-0 ceph-mon[74456]: Cluster is now healthy
Jan 26 09:41:47 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:41:47 compute-0 systemd[1]: Started libpod-conmon-423169b8a67ac545fe2eb45830d9646d6f6e903bce3b96f21c1b101b20dabbc2.scope.
Jan 26 09:41:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:47 compute-0 podman[87588]: 2026-01-26 09:41:47.251898421 +0000 UTC m=+0.335324416 container init 423169b8a67ac545fe2eb45830d9646d6f6e903bce3b96f21c1b101b20dabbc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lichterman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 09:41:47 compute-0 podman[87588]: 2026-01-26 09:41:47.259961034 +0000 UTC m=+0.343386959 container start 423169b8a67ac545fe2eb45830d9646d6f6e903bce3b96f21c1b101b20dabbc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lichterman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:47 compute-0 modest_lichterman[87628]: 167 167
Jan 26 09:41:47 compute-0 systemd[1]: libpod-423169b8a67ac545fe2eb45830d9646d6f6e903bce3b96f21c1b101b20dabbc2.scope: Deactivated successfully.
Jan 26 09:41:47 compute-0 podman[87588]: 2026-01-26 09:41:47.270401811 +0000 UTC m=+0.353827816 container attach 423169b8a67ac545fe2eb45830d9646d6f6e903bce3b96f21c1b101b20dabbc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lichterman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 09:41:47 compute-0 podman[87588]: 2026-01-26 09:41:47.270976166 +0000 UTC m=+0.354402121 container died 423169b8a67ac545fe2eb45830d9646d6f6e903bce3b96f21c1b101b20dabbc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lichterman, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 09:41:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:41:47 compute-0 python3[87690]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769420506.7190554-37315-272758716345521/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:41:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 26 09:41:47 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:41:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 26 09:41:47 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 26 09:41:47 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev ca0ab9be-c95d-49f2-971f-6cd171169440 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 26 09:41:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 26 09:41:47 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:41:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v119: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 26 09:41:47 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
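Note the two-step resize: the autoscaler first raises the pool's pg_num, the target, and the mgr then walks pg_num_actual up toward it so PG splits happen in controlled increments; here the whole jump from 1 to 32 fits in one round. A simplified sketch of that stepping (the real throttle also weighs the misplaced-object ratio):

    def step_pg_num_actual(actual: int, target: int, max_step: int = 32) -> int:
        # Advance the concrete PG count toward the target without
        # creating more than max_step new PGs in one round
        return target if actual >= target else min(target, actual + max_step)

    # step_pg_num_actual(1, 32) -> 32, matching the single pg_num_actual set above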
Jan 26 09:41:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-54a5cf72d56527c7f501015b8be4fdc0c68be42504f4eaf98721ae3d5bc6b281-merged.mount: Deactivated successfully.
Jan 26 09:41:47 compute-0 sudo[87791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqotzfhliemeeshlmpambbsstomisxlr ; /usr/bin/python3'
Jan 26 09:41:47 compute-0 sudo[87791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:48 compute-0 python3[87793]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:41:48 compute-0 podman[87588]: 2026-01-26 09:41:48.120403466 +0000 UTC m=+1.203829421 container remove 423169b8a67ac545fe2eb45830d9646d6f6e903bce3b96f21c1b101b20dabbc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 26 09:41:48 compute-0 sudo[87791]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:48 compute-0 systemd[1]: libpod-conmon-423169b8a67ac545fe2eb45830d9646d6f6e903bce3b96f21c1b101b20dabbc2.scope: Deactivated successfully.
Jan 26 09:41:48 compute-0 sudo[87887]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-intrkytysuwwpxgczolqdifxhvkttoaw ; /usr/bin/python3'
Jan 26 09:41:48 compute-0 sudo[87887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:48 compute-0 podman[87831]: 2026-01-26 09:41:48.268526797 +0000 UTC m=+0.022384834 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:48 compute-0 python3[87889]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769420507.807942-37330-241478618556983/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=a6e79820c13efbc89487d8af8c5cef3b7f749579 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:41:48 compute-0 sudo[87887]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:48 compute-0 podman[87831]: 2026-01-26 09:41:48.600701018 +0000 UTC m=+0.354559035 container create e8dafe36980178e9acca8d6994e8bdda789867cc9bde606c301e3e21c4e6f500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cray, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 26 09:41:48 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:41:48 compute-0 ceph-mon[74456]: osdmap e35: 3 total, 3 up, 3 in
Jan 26 09:41:48 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:41:48 compute-0 ceph-mon[74456]: pgmap v119: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:48 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:41:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:41:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 26 09:41:48 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 26 09:41:48 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 84a59410-3f2a-4e3a-a1dc-042f4ebff04c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 26 09:41:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 26 09:41:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:41:48 compute-0 systemd[1]: Started libpod-conmon-e8dafe36980178e9acca8d6994e8bdda789867cc9bde606c301e3e21c4e6f500.scope.
Jan 26 09:41:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2658eaa185ad3cca67fa9b5ce83045dbad52a792e46e507d316b7e219551ed27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2658eaa185ad3cca67fa9b5ce83045dbad52a792e46e507d316b7e219551ed27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2658eaa185ad3cca67fa9b5ce83045dbad52a792e46e507d316b7e219551ed27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2658eaa185ad3cca67fa9b5ce83045dbad52a792e46e507d316b7e219551ed27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:48 compute-0 sudo[87944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjqwrtjucepukqmrppymndlayekrbavx ; /usr/bin/python3'
Jan 26 09:41:48 compute-0 sudo[87944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:48 compute-0 python3[87946]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
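
The task above shows the pattern this playbook uses throughout: rather than installing Ceph client packages on the host, it wraps each one-shot `ceph` command in a disposable `podman run` with the host's /etc/ceph bind-mounted in. A minimal Python sketch of the same wrapper, assuming the image tag, fsid, and mounts seen in the log (the helper name `run_ceph` is ours, not the playbook's):

    import subprocess

    IMAGE = "quay.io/ceph/ceph:v19"                # image tag used by the job
    FSID = "1a70b85d-e3fd-5814-8a6a-37ea00fcae30"  # cluster fsid from the log

    def run_ceph(*ceph_args: str) -> str:
        """Run a one-shot `ceph` command in a disposable container (needs podman
        and a reachable cluster; the spec-file mount from the log is omitted)."""
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
            "--entrypoint", "ceph", IMAGE,
            "--fsid", FSID,
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            *ceph_args,
        ]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # The same call as the task above: fold the generated conf into the mon config db.
    # print(run_ceph("config", "assimilate-conf", "-i", "/home/assimilate_ceph.conf"))
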
Jan 26 09:41:49 compute-0 podman[87831]: 2026-01-26 09:41:49.35084201 +0000 UTC m=+1.104700047 container init e8dafe36980178e9acca8d6994e8bdda789867cc9bde606c301e3e21c4e6f500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:49 compute-0 podman[87831]: 2026-01-26 09:41:49.36293315 +0000 UTC m=+1.116791187 container start e8dafe36980178e9acca8d6994e8bdda789867cc9bde606c301e3e21c4e6f500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cray, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:41:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v121: 38 pgs: 1 peering, 31 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 26 09:41:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 26 09:41:49 compute-0 podman[87831]: 2026-01-26 09:41:49.652053912 +0000 UTC m=+1.405911969 container attach e8dafe36980178e9acca8d6994e8bdda789867cc9bde606c301e3e21c4e6f500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cray, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:49 compute-0 keen_cray[87923]: {
Jan 26 09:41:49 compute-0 keen_cray[87923]:     "0": [
Jan 26 09:41:49 compute-0 keen_cray[87923]:         {
Jan 26 09:41:49 compute-0 keen_cray[87923]:             "devices": [
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "/dev/loop3"
Jan 26 09:41:49 compute-0 keen_cray[87923]:             ],
Jan 26 09:41:49 compute-0 keen_cray[87923]:             "lv_name": "ceph_lv0",
Jan 26 09:41:49 compute-0 keen_cray[87923]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:41:49 compute-0 keen_cray[87923]:             "lv_size": "21470642176",
Jan 26 09:41:49 compute-0 keen_cray[87923]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:41:49 compute-0 keen_cray[87923]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:41:49 compute-0 keen_cray[87923]:             "name": "ceph_lv0",
Jan 26 09:41:49 compute-0 keen_cray[87923]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:41:49 compute-0 keen_cray[87923]:             "tags": {
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "ceph.cluster_name": "ceph",
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "ceph.crush_device_class": "",
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "ceph.encrypted": "0",
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "ceph.osd_id": "0",
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "ceph.type": "block",
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "ceph.vdo": "0",
Jan 26 09:41:49 compute-0 keen_cray[87923]:                 "ceph.with_tpm": "0"
Jan 26 09:41:49 compute-0 keen_cray[87923]:             },
Jan 26 09:41:49 compute-0 keen_cray[87923]:             "type": "block",
Jan 26 09:41:49 compute-0 keen_cray[87923]:             "vg_name": "ceph_vg0"
Jan 26 09:41:49 compute-0 keen_cray[87923]:         }
Jan 26 09:41:49 compute-0 keen_cray[87923]:     ]
Jan 26 09:41:49 compute-0 keen_cray[87923]: }
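
The JSON the keen_cray container prints above is a ceph-volume-style LVM inventory, keyed by OSD id, which is what cephadm reads when taking stock of existing OSDs. A short sketch of pulling out the fields that matter from such output; the `raw` string below is a trimmed stand-in for the container's stdout:

    import json

    # `raw` stands in for the JSON printed by the container above
    # (one top-level key per OSD id, each mapping to a list of LV records).
    raw = """
    { "0": [ { "lv_path": "/dev/ceph_vg0/ceph_lv0",
               "devices": ["/dev/loop3"],
               "tags": { "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
                         "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
                         "ceph.type": "block" } } ] }
    """

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(type={tags['ceph.type']}, osd_fsid={tags['ceph.osd_fsid']}, "
                  f"devices={','.join(lv['devices'])})")
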
Jan 26 09:41:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:41:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:41:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 26 09:41:49 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 26 09:41:49 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 04aa54ed-47d1-4631-96a3-061a5086b60c (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 26 09:41:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 26 09:41:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:41:49 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:41:49 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:41:49 compute-0 systemd[1]: libpod-e8dafe36980178e9acca8d6994e8bdda789867cc9bde606c301e3e21c4e6f500.scope: Deactivated successfully.
Jan 26 09:41:49 compute-0 ceph-mon[74456]: osdmap e36: 3 total, 3 up, 3 in
Jan 26 09:41:49 compute-0 conmon[87923]: conmon e8dafe36980178e9acca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8dafe36980178e9acca8d6994e8bdda789867cc9bde606c301e3e21c4e6f500.scope/container/memory.events
Jan 26 09:41:49 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:41:49 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:49 compute-0 podman[87831]: 2026-01-26 09:41:49.706510953 +0000 UTC m=+1.460369040 container died e8dafe36980178e9acca8d6994e8bdda789867cc9bde606c301e3e21c4e6f500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cray, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2658eaa185ad3cca67fa9b5ce83045dbad52a792e46e507d316b7e219551ed27-merged.mount: Deactivated successfully.
Jan 26 09:41:50 compute-0 podman[87831]: 2026-01-26 09:41:50.377356637 +0000 UTC m=+2.131214644 container remove e8dafe36980178e9acca8d6994e8bdda789867cc9bde606c301e3e21c4e6f500 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:50 compute-0 systemd[1]: libpod-conmon-e8dafe36980178e9acca8d6994e8bdda789867cc9bde606c301e3e21c4e6f500.scope: Deactivated successfully.
Jan 26 09:41:50 compute-0 sudo[87449]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:50 compute-0 sudo[87978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:41:50 compute-0 sudo[87978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:50 compute-0 sudo[87978]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:50 compute-0 podman[87947]: 2026-01-26 09:41:50.459416298 +0000 UTC m=+1.577453788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:50 compute-0 sudo[88005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:41:50 compute-0 sudo[88005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 26 09:41:50 compute-0 podman[87947]: 2026-01-26 09:41:50.917434241 +0000 UTC m=+2.035471751 container create 265e3fc46768531d453f8bbeaf12d8fad698c2a87984d966b9444dc26b50629a (image=quay.io/ceph/ceph:v19, name=sleepy_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 26 09:41:50 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:41:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 26 09:41:50 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 26 09:41:50 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev b9a8182f-4d76-4a68-b749-9e9056527e68 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 26 09:41:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Jan 26 09:41:50 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:41:50 compute-0 ceph-mon[74456]: pgmap v121: 38 pgs: 1 peering, 31 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:50 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:41:50 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:41:50 compute-0 ceph-mon[74456]: osdmap e37: 3 total, 3 up, 3 in
Jan 26 09:41:50 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:41:50 compute-0 systemd[1]: Started libpod-conmon-265e3fc46768531d453f8bbeaf12d8fad698c2a87984d966b9444dc26b50629a.scope.
Jan 26 09:41:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f264883e0179a184a2ecd1e731148fc132c8071477025360c6ac7c937b2ac513/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f264883e0179a184a2ecd1e731148fc132c8071477025360c6ac7c937b2ac513/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f264883e0179a184a2ecd1e731148fc132c8071477025360c6ac7c937b2ac513/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:51 compute-0 podman[87947]: 2026-01-26 09:41:51.304387171 +0000 UTC m=+2.422424671 container init 265e3fc46768531d453f8bbeaf12d8fad698c2a87984d966b9444dc26b50629a (image=quay.io/ceph/ceph:v19, name=sleepy_babbage, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:41:51 compute-0 podman[87947]: 2026-01-26 09:41:51.310607755 +0000 UTC m=+2.428645225 container start 265e3fc46768531d453f8bbeaf12d8fad698c2a87984d966b9444dc26b50629a (image=quay.io/ceph/ceph:v19, name=sleepy_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 26 09:41:51 compute-0 podman[87947]: 2026-01-26 09:41:51.317625351 +0000 UTC m=+2.435662861 container attach 265e3fc46768531d453f8bbeaf12d8fad698c2a87984d966b9444dc26b50629a (image=quay.io/ceph/ceph:v19, name=sleepy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:51 compute-0 podman[88074]: 2026-01-26 09:41:51.444322045 +0000 UTC m=+0.035374887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:51 compute-0 podman[88074]: 2026-01-26 09:41:51.598022633 +0000 UTC m=+0.189075385 container create ab6fe3c83fbb51bdb091395dd14a4891d7ee9b79b097747d354170fe86944a77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 09:41:51 compute-0 systemd[1]: Started libpod-conmon-ab6fe3c83fbb51bdb091395dd14a4891d7ee9b79b097747d354170fe86944a77.scope.
Jan 26 09:41:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v124: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:51 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 26 09:41:51 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:51 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 26 09:41:51 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:51 compute-0 ceph-mgr[74755]: [progress WARNING root] Starting Global Recovery Event,63 pgs not in active + clean state
Jan 26 09:41:51 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 26 09:41:51 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4014478342' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 26 09:41:51 compute-0 podman[88074]: 2026-01-26 09:41:51.81746403 +0000 UTC m=+0.408516802 container init ab6fe3c83fbb51bdb091395dd14a4891d7ee9b79b097747d354170fe86944a77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 09:41:51 compute-0 podman[88074]: 2026-01-26 09:41:51.828758489 +0000 UTC m=+0.419811281 container start ab6fe3c83fbb51bdb091395dd14a4891d7ee9b79b097747d354170fe86944a77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_williamson, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 26 09:41:51 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4014478342' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 26 09:41:51 compute-0 podman[88074]: 2026-01-26 09:41:51.83406352 +0000 UTC m=+0.425116312 container attach ab6fe3c83fbb51bdb091395dd14a4891d7ee9b79b097747d354170fe86944a77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 26 09:41:51 compute-0 condescending_williamson[88109]: 167 167
Jan 26 09:41:51 compute-0 sleepy_babbage[88051]: 
Jan 26 09:41:51 compute-0 sleepy_babbage[88051]: [global]
Jan 26 09:41:51 compute-0 sleepy_babbage[88051]:         fsid = 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:41:51 compute-0 sleepy_babbage[88051]:         mon_host = 192.168.122.100
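
The two `[global]` lines sleepy_babbage echoes back appear to be the usual `config assimilate-conf` residue: bootstrap options such as fsid and mon_host stay in the local ceph.conf, since a client needs them before it can reach the mon config store at all. A quick shape check of that residue (the string is transcribed from the log, with indentation dropped so configparser does not treat the lines as continuations):

    import configparser

    # Transcribed from the two sleepy_babbage lines above.
    residue = (
        "[global]\n"
        "fsid = 1a70b85d-e3fd-5814-8a6a-37ea00fcae30\n"
        "mon_host = 192.168.122.100\n"
    )

    cp = configparser.ConfigParser()
    cp.read_string(residue)
    # Only the bootstrap keys should remain; everything else went to the mon store.
    assert set(cp["global"]) == {"fsid", "mon_host"}
    print(dict(cp["global"]))
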
Jan 26 09:41:51 compute-0 systemd[1]: libpod-ab6fe3c83fbb51bdb091395dd14a4891d7ee9b79b097747d354170fe86944a77.scope: Deactivated successfully.
Jan 26 09:41:51 compute-0 podman[88074]: 2026-01-26 09:41:51.839539835 +0000 UTC m=+0.430592627 container died ab6fe3c83fbb51bdb091395dd14a4891d7ee9b79b097747d354170fe86944a77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_williamson, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:51 compute-0 systemd[1]: libpod-265e3fc46768531d453f8bbeaf12d8fad698c2a87984d966b9444dc26b50629a.scope: Deactivated successfully.
Jan 26 09:41:51 compute-0 podman[87947]: 2026-01-26 09:41:51.878153317 +0000 UTC m=+2.996190807 container died 265e3fc46768531d453f8bbeaf12d8fad698c2a87984d966b9444dc26b50629a (image=quay.io/ceph/ceph:v19, name=sleepy_babbage, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 26 09:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3c3f9f53991a140a3781de5eaec0572625b540deb94312392cb39c0533fea7d-merged.mount: Deactivated successfully.
Jan 26 09:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f264883e0179a184a2ecd1e731148fc132c8071477025360c6ac7c937b2ac513-merged.mount: Deactivated successfully.
Jan 26 09:41:51 compute-0 podman[88074]: 2026-01-26 09:41:51.922742227 +0000 UTC m=+0.513794979 container remove ab6fe3c83fbb51bdb091395dd14a4891d7ee9b79b097747d354170fe86944a77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:51 compute-0 systemd[1]: libpod-conmon-ab6fe3c83fbb51bdb091395dd14a4891d7ee9b79b097747d354170fe86944a77.scope: Deactivated successfully.
Jan 26 09:41:51 compute-0 podman[87947]: 2026-01-26 09:41:51.942264914 +0000 UTC m=+3.060302394 container remove 265e3fc46768531d453f8bbeaf12d8fad698c2a87984d966b9444dc26b50629a (image=quay.io/ceph/ceph:v19, name=sleepy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Jan 26 09:41:51 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 26 09:41:51 compute-0 systemd[1]: libpod-conmon-265e3fc46768531d453f8bbeaf12d8fad698c2a87984d966b9444dc26b50629a.scope: Deactivated successfully.
Jan 26 09:41:51 compute-0 sudo[87944]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:52 compute-0 sudo[88187]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjjnrliyeprcdaudkeeltbxekgrgifam ; /usr/bin/python3'
Jan 26 09:41:52 compute-0 sudo[88187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:52 compute-0 podman[88150]: 2026-01-26 09:41:52.043575104 +0000 UTC m=+0.020176345 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:52 compute-0 python3[88189]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
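
This second wrapped command stores a colon-separated SSL option string under the `ssl_option` config-key. A sketch of reading the key back to confirm the write, assuming for simplicity a `ceph` CLI and admin keyring directly on the host rather than the playbook's podman wrapper:

    import subprocess

    # Read back the key set above; `ceph config-key get` prints the stored value.
    out = subprocess.run(
        ["ceph", "config-key", "get", "ssl_option"],
        check=True, capture_output=True, text=True,
    ).stdout
    assert out.strip() == "no_sslv2:sslv3:no_tlsv1:no_tlsv1_1", out
    print("ssl_option stored as expected")
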
Jan 26 09:41:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:41:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:41:52 compute-0 podman[88150]: 2026-01-26 09:41:52.381566109 +0000 UTC m=+0.358167350 container create 6232524b527813ebfe5387584b8a4be6d70de50765f7fefe0c614e1973e03924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:41:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:41:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 26 09:41:52 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 39 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=39 pruub=12.198214531s) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active pruub 102.236122131s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:41:52 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 26 09:41:52 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 39 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=39 pruub=12.198214531s) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown pruub 102.236122131s@ mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:52 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 942cf8b8-df44-4083-94b1-3a2019735f16 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 26 09:41:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 26 09:41:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:41:52 compute-0 ceph-mon[74456]: 2.1f scrub starts
Jan 26 09:41:52 compute-0 ceph-mon[74456]: 2.1f scrub ok
Jan 26 09:41:52 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:41:52 compute-0 ceph-mon[74456]: osdmap e38: 3 total, 3 up, 3 in
Jan 26 09:41:52 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:41:52 compute-0 ceph-mon[74456]: 3.1f scrub starts
Jan 26 09:41:52 compute-0 ceph-mon[74456]: 3.1f scrub ok
Jan 26 09:41:52 compute-0 ceph-mon[74456]: pgmap v124: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:52 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:52 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4014478342' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 26 09:41:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4014478342' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 26 09:41:52 compute-0 podman[88190]: 2026-01-26 09:41:52.439504523 +0000 UTC m=+0.193738649 container create 342f0b284bc38b84842a6a419bfc9cede279303028295feec968aac489d6f8bd (image=quay.io/ceph/ceph:v19, name=quizzical_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:52 compute-0 systemd[1]: Started libpod-conmon-6232524b527813ebfe5387584b8a4be6d70de50765f7fefe0c614e1973e03924.scope.
Jan 26 09:41:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ca93e6e5de4ef4c31a42d8517f2f084a0349cbc85cd5c6c895e1558ccd6d4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ca93e6e5de4ef4c31a42d8517f2f084a0349cbc85cd5c6c895e1558ccd6d4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ca93e6e5de4ef4c31a42d8517f2f084a0349cbc85cd5c6c895e1558ccd6d4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ca93e6e5de4ef4c31a42d8517f2f084a0349cbc85cd5c6c895e1558ccd6d4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:52 compute-0 systemd[1]: Started libpod-conmon-342f0b284bc38b84842a6a419bfc9cede279303028295feec968aac489d6f8bd.scope.
Jan 26 09:41:52 compute-0 podman[88150]: 2026-01-26 09:41:52.494788475 +0000 UTC m=+0.471389706 container init 6232524b527813ebfe5387584b8a4be6d70de50765f7fefe0c614e1973e03924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_rubin, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:52 compute-0 podman[88190]: 2026-01-26 09:41:52.416624866 +0000 UTC m=+0.170859002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec5ae12a214f2e3fcfd58a6d27d5a389a2cdda3cb20406903b4ef7f98b263fd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec5ae12a214f2e3fcfd58a6d27d5a389a2cdda3cb20406903b4ef7f98b263fd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec5ae12a214f2e3fcfd58a6d27d5a389a2cdda3cb20406903b4ef7f98b263fd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:52 compute-0 podman[88150]: 2026-01-26 09:41:52.511724233 +0000 UTC m=+0.488325454 container start 6232524b527813ebfe5387584b8a4be6d70de50765f7fefe0c614e1973e03924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_rubin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 26 09:41:52 compute-0 podman[88150]: 2026-01-26 09:41:52.624987031 +0000 UTC m=+0.601588252 container attach 6232524b527813ebfe5387584b8a4be6d70de50765f7fefe0c614e1973e03924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:41:52 compute-0 podman[88190]: 2026-01-26 09:41:52.653261629 +0000 UTC m=+0.407495735 container init 342f0b284bc38b84842a6a419bfc9cede279303028295feec968aac489d6f8bd (image=quay.io/ceph/ceph:v19, name=quizzical_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 26 09:41:52 compute-0 podman[88190]: 2026-01-26 09:41:52.660684156 +0000 UTC m=+0.414918252 container start 342f0b284bc38b84842a6a419bfc9cede279303028295feec968aac489d6f8bd (image=quay.io/ceph/ceph:v19, name=quizzical_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 09:41:52 compute-0 podman[88190]: 2026-01-26 09:41:52.664350263 +0000 UTC m=+0.418584369 container attach 342f0b284bc38b84842a6a419bfc9cede279303028295feec968aac489d6f8bd (image=quay.io/ceph/ceph:v19, name=quizzical_merkle, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:53 compute-0 lvm[88304]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:41:53 compute-0 lvm[88304]: VG ceph_vg0 finished
Jan 26 09:41:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 26 09:41:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1576342763' entity='client.admin' 
Jan 26 09:41:53 compute-0 quizzical_merkle[88210]: set ssl_option
Jan 26 09:41:53 compute-0 systemd[1]: libpod-342f0b284bc38b84842a6a419bfc9cede279303028295feec968aac489d6f8bd.scope: Deactivated successfully.
Jan 26 09:41:53 compute-0 podman[88190]: 2026-01-26 09:41:53.175306025 +0000 UTC m=+0.929540111 container died 342f0b284bc38b84842a6a419bfc9cede279303028295feec968aac489d6f8bd (image=quay.io/ceph/ceph:v19, name=quizzical_merkle, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 09:41:53 compute-0 quizzical_rubin[88205]: {}
Jan 26 09:41:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ec5ae12a214f2e3fcfd58a6d27d5a389a2cdda3cb20406903b4ef7f98b263fd-merged.mount: Deactivated successfully.
Jan 26 09:41:53 compute-0 systemd[1]: libpod-6232524b527813ebfe5387584b8a4be6d70de50765f7fefe0c614e1973e03924.scope: Deactivated successfully.
Jan 26 09:41:53 compute-0 systemd[1]: libpod-6232524b527813ebfe5387584b8a4be6d70de50765f7fefe0c614e1973e03924.scope: Consumed 1.137s CPU time.
Jan 26 09:41:53 compute-0 podman[88190]: 2026-01-26 09:41:53.22612041 +0000 UTC m=+0.980354486 container remove 342f0b284bc38b84842a6a419bfc9cede279303028295feec968aac489d6f8bd (image=quay.io/ceph/ceph:v19, name=quizzical_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:53 compute-0 podman[88150]: 2026-01-26 09:41:53.22726649 +0000 UTC m=+1.203867701 container died 6232524b527813ebfe5387584b8a4be6d70de50765f7fefe0c614e1973e03924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_rubin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Jan 26 09:41:53 compute-0 systemd[1]: libpod-conmon-342f0b284bc38b84842a6a419bfc9cede279303028295feec968aac489d6f8bd.scope: Deactivated successfully.
Jan 26 09:41:53 compute-0 sudo[88187]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0ca93e6e5de4ef4c31a42d8517f2f084a0349cbc85cd5c6c895e1558ccd6d4b-merged.mount: Deactivated successfully.
Jan 26 09:41:53 compute-0 sudo[88352]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nacdfsgsbrijjiqtaykaufauffvzwgyv ; /usr/bin/python3'
Jan 26 09:41:53 compute-0 sudo[88352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 26 09:41:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:41:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 26 09:41:53 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 2653e1c1-2b29-40bb-9af7-09d1fb9f0187 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev ca0ab9be-c95d-49f2-971f-6cd171169440 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event ca0ab9be-c95d-49f2-971f-6cd171169440 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 6 seconds
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev 84a59410-3f2a-4e3a-a1dc-042f4ebff04c (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 84a59410-3f2a-4e3a-a1dc-042f4ebff04c (PG autoscaler increasing pool 3 PGs from 1 to 32) in 5 seconds
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev 04aa54ed-47d1-4631-96a3-061a5086b60c (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 04aa54ed-47d1-4631-96a3-061a5086b60c (PG autoscaler increasing pool 4 PGs from 1 to 32) in 4 seconds
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev b9a8182f-4d76-4a68-b749-9e9056527e68 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event b9a8182f-4d76-4a68-b749-9e9056527e68 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Jan 26 09:41:53 compute-0 podman[88150]: 2026-01-26 09:41:53.412439722 +0000 UTC m=+1.389040933 container remove 6232524b527813ebfe5387584b8a4be6d70de50765f7fefe0c614e1973e03924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_rubin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev 942cf8b8-df44-4083-94b1-3a2019735f16 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 942cf8b8-df44-4083-94b1-3a2019735f16 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 second
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev 2653e1c1-2b29-40bb-9af7-09d1fb9f0187 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 2653e1c1-2b29-40bb-9af7-09d1fb9f0187 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
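[annotation] The progress events above record the pg_autoscaler raising pg_num from 1 to 32 on pools 2 through 7. A minimal sketch of inspecting and issuing the same change by hand; the pool name is taken from the "osd pool set" audit entries in this log, and the autoscale-status output shape is not reproduced here:

    # Show current vs. target PG counts per pool (pg_autoscaler view).
    ceph osd pool autoscale-status

    # Equivalent of the mon_command the mgr dispatched above: raise the
    # logical PG count; pg_num_actual then converges toward it in steps.
    ceph osd pool set cephfs.cephfs.data pg_num 32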
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1e( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.b( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.16( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.17( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.c( empty local-lis/les=16/17 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1e( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.b( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-mon[74456]: 2.1d deep-scrub starts
Jan 26 09:41:53 compute-0 ceph-mon[74456]: 2.1d deep-scrub ok
Jan 26 09:41:53 compute-0 ceph-mon[74456]: 2.1e scrub starts
Jan 26 09:41:53 compute-0 ceph-mon[74456]: 2.1e scrub ok
Jan 26 09:41:53 compute-0 ceph-mon[74456]: 3.1d scrub starts
Jan 26 09:41:53 compute-0 ceph-mon[74456]: 3.1d scrub ok
Jan 26 09:41:53 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:41:53 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:41:53 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:41:53 compute-0 ceph-mon[74456]: osdmap e39: 3 total, 3 up, 3 in
Jan 26 09:41:53 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:41:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1576342763' entity='client.admin' 
Jan 26 09:41:53 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:41:53 compute-0 ceph-mon[74456]: osdmap e40: 3 total, 3 up, 3 in
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=39/40 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.17( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.16( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 40 pg[4.c( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=16/16 les/c/f=17/17/0 sis=39) [0] r=0 lpr=39 pi=[16,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:53 compute-0 systemd[1]: libpod-conmon-6232524b527813ebfe5387584b8a4be6d70de50765f7fefe0c614e1973e03924.scope: Deactivated successfully.
Jan 26 09:41:53 compute-0 sudo[88005]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:41:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:41:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:53 compute-0 python3[88354]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
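[annotation] The Ansible task above feeds /tmp/ceph_rgw.yml to "ceph orch apply --in-file" inside a one-shot container. A sketch of a spec of that shape, reconstructed from the "Saving service ... spec" lines later in this log (placement compute-0;compute-1;compute-2 for rgw.rgw, count:2 for ingress.rgw.default); the virtual_ip and frontend_port values are hypothetical placeholders, not taken from this log:

    cat > /tmp/ceph_rgw.yml <<'EOF'
    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    ---
    service_type: ingress
    service_id: rgw.default
    placement:
      count: 2
    spec:
      backend_service: rgw.rgw
      virtual_ip: 192.0.2.10/24  # hypothetical; not present in this log
      frontend_port: 8080        # hypothetical; not present in this log
    EOF

    # Apply it the same way the task does, minus the container wrapper:
    ceph orch apply --in-file /tmp/ceph_rgw.yml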
Jan 26 09:41:53 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 26 09:41:53 compute-0 sudo[88356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:41:53 compute-0 sudo[88356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:53 compute-0 sudo[88356]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:53 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v127: 131 pgs: 2 peering, 93 unknown, 36 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 26 09:41:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 26 09:41:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:53 compute-0 podman[88379]: 2026-01-26 09:41:53.565879932 +0000 UTC m=+0.019529068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 26 09:41:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 26 09:41:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 09:41:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 26 09:41:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 09:41:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 26 09:41:53 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
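[annotation] The "config generate-minimal-conf" dispatches above are how cephadm rebuilds the ceph.conf it pushes to each daemon it reconfigures. A sketch of running it directly; the fsid matches the cephadm command line recorded in this log, while the mon_host value is a placeholder:

    ceph config generate-minimal-conf
    # Expected shape (mon address illustrative):
    # [global]
    #         fsid = 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
    #         mon_host = [v2:<mon-addr>:3300/0,v1:<mon-addr>:6789/0]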
Jan 26 09:41:53 compute-0 podman[88379]: 2026-01-26 09:41:53.696111399 +0000 UTC m=+0.149760515 container create 75b0ab0523465969b7b6ddbe4a0bce90ca47e1a5bc6cded5ef4634741d79c5a8 (image=quay.io/ceph/ceph:v19, name=lucid_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:53 compute-0 systemd[1]: Started libpod-conmon-75b0ab0523465969b7b6ddbe4a0bce90ca47e1a5bc6cded5ef4634741d79c5a8.scope.
Jan 26 09:41:53 compute-0 sudo[88392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:41:53 compute-0 sudo[88392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:53 compute-0 sudo[88392]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d9e88a8a8b90564845785c11747c16889c25c95ec6baf5d79a6d12b8b33c53/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d9e88a8a8b90564845785c11747c16889c25c95ec6baf5d79a6d12b8b33c53/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d9e88a8a8b90564845785c11747c16889c25c95ec6baf5d79a6d12b8b33c53/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:53 compute-0 podman[88379]: 2026-01-26 09:41:53.77213178 +0000 UTC m=+0.225780926 container init 75b0ab0523465969b7b6ddbe4a0bce90ca47e1a5bc6cded5ef4634741d79c5a8 (image=quay.io/ceph/ceph:v19, name=lucid_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:41:53 compute-0 podman[88379]: 2026-01-26 09:41:53.778799507 +0000 UTC m=+0.232448623 container start 75b0ab0523465969b7b6ddbe4a0bce90ca47e1a5bc6cded5ef4634741d79c5a8 (image=quay.io/ceph/ceph:v19, name=lucid_agnesi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:53 compute-0 podman[88379]: 2026-01-26 09:41:53.781852037 +0000 UTC m=+0.235501153 container attach 75b0ab0523465969b7b6ddbe4a0bce90ca47e1a5bc6cded5ef4634741d79c5a8 (image=quay.io/ceph/ceph:v19, name=lucid_agnesi, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:53 compute-0 sudo[88423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:41:53 compute-0 sudo[88423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:54 compute-0 podman[88484]: 2026-01-26 09:41:54.10309799 +0000 UTC m=+0.043089872 container create 51edb22c0fcf2fa6ef1195179f9e7e29511e0153e8b05865746b165b14907da4 (image=quay.io/ceph/ceph:v19, name=clever_liskov, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:54 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:41:54 compute-0 systemd[1]: Started libpod-conmon-51edb22c0fcf2fa6ef1195179f9e7e29511e0153e8b05865746b165b14907da4.scope.
Jan 26 09:41:54 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 26 09:41:54 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 26 09:41:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:54 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 26 09:41:54 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 26 09:41:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 26 09:41:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:54 compute-0 podman[88484]: 2026-01-26 09:41:54.079161626 +0000 UTC m=+0.019153518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:54 compute-0 podman[88484]: 2026-01-26 09:41:54.198626348 +0000 UTC m=+0.138618240 container init 51edb22c0fcf2fa6ef1195179f9e7e29511e0153e8b05865746b165b14907da4 (image=quay.io/ceph/ceph:v19, name=clever_liskov, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:54 compute-0 podman[88484]: 2026-01-26 09:41:54.203715913 +0000 UTC m=+0.143707785 container start 51edb22c0fcf2fa6ef1195179f9e7e29511e0153e8b05865746b165b14907da4 (image=quay.io/ceph/ceph:v19, name=clever_liskov, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 09:41:54 compute-0 lucid_agnesi[88419]: Scheduled rgw.rgw update...
Jan 26 09:41:54 compute-0 lucid_agnesi[88419]: Scheduled ingress.rgw.default update...
Jan 26 09:41:54 compute-0 clever_liskov[88501]: 167 167
Jan 26 09:41:54 compute-0 systemd[1]: libpod-51edb22c0fcf2fa6ef1195179f9e7e29511e0153e8b05865746b165b14907da4.scope: Deactivated successfully.
Jan 26 09:41:54 compute-0 podman[88484]: 2026-01-26 09:41:54.213911932 +0000 UTC m=+0.153903834 container attach 51edb22c0fcf2fa6ef1195179f9e7e29511e0153e8b05865746b165b14907da4 (image=quay.io/ceph/ceph:v19, name=clever_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:41:54 compute-0 podman[88484]: 2026-01-26 09:41:54.214143968 +0000 UTC m=+0.154135840 container died 51edb22c0fcf2fa6ef1195179f9e7e29511e0153e8b05865746b165b14907da4 (image=quay.io/ceph/ceph:v19, name=clever_liskov, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:54 compute-0 systemd[1]: libpod-75b0ab0523465969b7b6ddbe4a0bce90ca47e1a5bc6cded5ef4634741d79c5a8.scope: Deactivated successfully.
Jan 26 09:41:54 compute-0 podman[88379]: 2026-01-26 09:41:54.227215704 +0000 UTC m=+0.680864820 container died 75b0ab0523465969b7b6ddbe4a0bce90ca47e1a5bc6cded5ef4634741d79c5a8 (image=quay.io/ceph/ceph:v19, name=lucid_agnesi, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b154b6659ee41c3dbd7f825ba30f0a6d21d3942852d5529260111cb9c1be8b04-merged.mount: Deactivated successfully.
Jan 26 09:41:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7d9e88a8a8b90564845785c11747c16889c25c95ec6baf5d79a6d12b8b33c53-merged.mount: Deactivated successfully.
Jan 26 09:41:54 compute-0 podman[88484]: 2026-01-26 09:41:54.254165438 +0000 UTC m=+0.194157310 container remove 51edb22c0fcf2fa6ef1195179f9e7e29511e0153e8b05865746b165b14907da4 (image=quay.io/ceph/ceph:v19, name=clever_liskov, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 09:41:54 compute-0 systemd[1]: libpod-conmon-51edb22c0fcf2fa6ef1195179f9e7e29511e0153e8b05865746b165b14907da4.scope: Deactivated successfully.
Jan 26 09:41:54 compute-0 podman[88379]: 2026-01-26 09:41:54.275945734 +0000 UTC m=+0.729594840 container remove 75b0ab0523465969b7b6ddbe4a0bce90ca47e1a5bc6cded5ef4634741d79c5a8 (image=quay.io/ceph/ceph:v19, name=lucid_agnesi, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:54 compute-0 systemd[1]: libpod-conmon-75b0ab0523465969b7b6ddbe4a0bce90ca47e1a5bc6cded5ef4634741d79c5a8.scope: Deactivated successfully.
Jan 26 09:41:54 compute-0 sudo[88352]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:54 compute-0 sudo[88423]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:54 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.zllcia (monmap changed)...
Jan 26 09:41:54 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.zllcia (monmap changed)...
Jan 26 09:41:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.zllcia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zllcia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 09:41:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 09:41:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:54 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.zllcia on compute-0
Jan 26 09:41:54 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.zllcia on compute-0
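[annotation] The "auth get-or-create" mon_command above maps one-to-one onto the CLI; a sketch of the equivalent invocation, with the entity name and capabilities copied from the audit line:

    # Returns the existing key for this mgr entity, or creates it with
    # exactly these capabilities if it does not exist yet.
    ceph auth get-or-create mgr.compute-0.zllcia \
        mon 'profile mgr' osd 'allow *' mds 'allow *'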
Jan 26 09:41:54 compute-0 sudo[88532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:41:54 compute-0 sudo[88532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:54 compute-0 sudo[88532]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:54 compute-0 ceph-mon[74456]: 2.9 deep-scrub starts
Jan 26 09:41:54 compute-0 ceph-mon[74456]: 2.9 deep-scrub ok
Jan 26 09:41:54 compute-0 ceph-mon[74456]: 3.1e scrub starts
Jan 26 09:41:54 compute-0 ceph-mon[74456]: 3.1e scrub ok
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:54 compute-0 ceph-mon[74456]: 4.1f scrub starts
Jan 26 09:41:54 compute-0 ceph-mon[74456]: 4.1f scrub ok
Jan 26 09:41:54 compute-0 ceph-mon[74456]: pgmap v127: 131 pgs: 2 peering, 93 unknown, 36 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
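[annotation] pgmap lines like the one above (131 pgs: 2 peering, 93 unknown, 36 active+clean) summarize placement-group health while the newly resized pools are still creating PGs. A sketch of querying the same summary on demand:

    # One-line PG summary, worded like the pgmap entries above.
    ceph pg stat

    # Fuller cluster view, including health, pools, and PG states.
    ceph -s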
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:54 compute-0 ceph-mon[74456]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:54 compute-0 ceph-mon[74456]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zllcia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 09:41:54 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:54 compute-0 ceph-mon[74456]: 3.b scrub starts
Jan 26 09:41:54 compute-0 ceph-mon[74456]: 3.b scrub ok
Jan 26 09:41:54 compute-0 sudo[88564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:41:54 compute-0 sudo[88564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:41:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 26 09:41:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=41 pruub=9.607183456s) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active pruub 101.756835938s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:41:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=41 pruub=9.607183456s) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown pruub 101.756835938s@ mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:54 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Jan 26 09:41:54 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Jan 26 09:41:54 compute-0 python3[88657]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:41:54 compute-0 podman[88673]: 2026-01-26 09:41:54.764904015 +0000 UTC m=+0.045506196 container create b0ca6a90c65724b39ff569b5c67302bcc0611cca19e0b1150551c8930daabe66 (image=quay.io/ceph/ceph:v19, name=bold_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:41:54 compute-0 systemd[1]: Started libpod-conmon-b0ca6a90c65724b39ff569b5c67302bcc0611cca19e0b1150551c8930daabe66.scope.
Jan 26 09:41:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:54 compute-0 podman[88673]: 2026-01-26 09:41:54.8331365 +0000 UTC m=+0.113738681 container init b0ca6a90c65724b39ff569b5c67302bcc0611cca19e0b1150551c8930daabe66 (image=quay.io/ceph/ceph:v19, name=bold_varahamihira, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 26 09:41:54 compute-0 podman[88673]: 2026-01-26 09:41:54.743004455 +0000 UTC m=+0.023606666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:54 compute-0 podman[88673]: 2026-01-26 09:41:54.840550927 +0000 UTC m=+0.121153108 container start b0ca6a90c65724b39ff569b5c67302bcc0611cca19e0b1150551c8930daabe66 (image=quay.io/ceph/ceph:v19, name=bold_varahamihira, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:41:54 compute-0 podman[88673]: 2026-01-26 09:41:54.843778222 +0000 UTC m=+0.124380403 container attach b0ca6a90c65724b39ff569b5c67302bcc0611cca19e0b1150551c8930daabe66 (image=quay.io/ceph/ceph:v19, name=bold_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 09:41:54 compute-0 bold_varahamihira[88726]: 167 167
Jan 26 09:41:54 compute-0 systemd[1]: libpod-b0ca6a90c65724b39ff569b5c67302bcc0611cca19e0b1150551c8930daabe66.scope: Deactivated successfully.
Jan 26 09:41:54 compute-0 conmon[88726]: conmon b0ca6a90c65724b39ff5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0ca6a90c65724b39ff569b5c67302bcc0611cca19e0b1150551c8930daabe66.scope/container/memory.events
Jan 26 09:41:54 compute-0 podman[88673]: 2026-01-26 09:41:54.846470953 +0000 UTC m=+0.127073134 container died b0ca6a90c65724b39ff569b5c67302bcc0611cca19e0b1150551c8930daabe66 (image=quay.io/ceph/ceph:v19, name=bold_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 26 09:41:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0736f9b08588761b562cba42552304d1071b420238348c9be5f6732587329367-merged.mount: Deactivated successfully.
Jan 26 09:41:54 compute-0 podman[88673]: 2026-01-26 09:41:54.875043449 +0000 UTC m=+0.155645630 container remove b0ca6a90c65724b39ff569b5c67302bcc0611cca19e0b1150551c8930daabe66 (image=quay.io/ceph/ceph:v19, name=bold_varahamihira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:41:54 compute-0 systemd[1]: libpod-conmon-b0ca6a90c65724b39ff569b5c67302bcc0611cca19e0b1150551c8930daabe66.scope: Deactivated successfully.
Jan 26 09:41:54 compute-0 sudo[88564]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:54 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 26 09:41:54 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 26 09:41:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 09:41:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:54 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:54 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 26 09:41:54 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 26 09:41:55 compute-0 sudo[88778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:41:55 compute-0 sudo[88778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:55 compute-0 python3[88773]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769420514.4304674-37349-276393100857023/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:41:55 compute-0 sudo[88778]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:55 compute-0 sudo[88803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:41:55 compute-0 sudo[88803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:55 compute-0 podman[88868]: 2026-01-26 09:41:55.392401652 +0000 UTC m=+0.047639172 container create 23df3de3a6112c6a749d3966918bc13b27824d6a9c57f3891a4d33482ec3197a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:41:55 compute-0 systemd[1]: Started libpod-conmon-23df3de3a6112c6a749d3966918bc13b27824d6a9c57f3891a4d33482ec3197a.scope.
Jan 26 09:41:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:55 compute-0 podman[88868]: 2026-01-26 09:41:55.466859352 +0000 UTC m=+0.122096892 container init 23df3de3a6112c6a749d3966918bc13b27824d6a9c57f3891a4d33482ec3197a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 09:41:55 compute-0 podman[88868]: 2026-01-26 09:41:55.372965367 +0000 UTC m=+0.028202907 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:55 compute-0 podman[88868]: 2026-01-26 09:41:55.472955813 +0000 UTC m=+0.128193333 container start 23df3de3a6112c6a749d3966918bc13b27824d6a9c57f3891a4d33482ec3197a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_cray, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 09:41:55 compute-0 podman[88868]: 2026-01-26 09:41:55.476336853 +0000 UTC m=+0.131574393 container attach 23df3de3a6112c6a749d3966918bc13b27824d6a9c57f3891a4d33482ec3197a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 09:41:55 compute-0 goofy_cray[88884]: 167 167
Jan 26 09:41:55 compute-0 systemd[1]: libpod-23df3de3a6112c6a749d3966918bc13b27824d6a9c57f3891a4d33482ec3197a.scope: Deactivated successfully.
Jan 26 09:41:55 compute-0 podman[88868]: 2026-01-26 09:41:55.479470506 +0000 UTC m=+0.134708026 container died 23df3de3a6112c6a749d3966918bc13b27824d6a9c57f3891a4d33482ec3197a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_cray, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 26 09:41:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-432ecd33dde6120f5e21c583d4d69b93b18bc163349c860ddc4e6c7db2cab9f9-merged.mount: Deactivated successfully.
Jan 26 09:41:55 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Jan 26 09:41:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 26 09:41:55 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 26 09:41:55 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Jan 26 09:41:55 compute-0 podman[88868]: 2026-01-26 09:41:55.515368185 +0000 UTC m=+0.170605705 container remove 23df3de3a6112c6a749d3966918bc13b27824d6a9c57f3891a4d33482ec3197a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:41:55 compute-0 sudo[88919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbhcvchqifsnisiwwbnvdgcemdagldlj ; /usr/bin/python3'
Jan 26 09:41:55 compute-0 sudo[88919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.16( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.12( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1b( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.0( empty local-lis/les=41/42 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.18( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.19( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 42 pg[6.10( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [0] r=0 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:41:55 compute-0 ceph-mon[74456]: from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:41:55 compute-0 ceph-mon[74456]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 26 09:41:55 compute-0 ceph-mon[74456]: Saving service ingress.rgw.default spec with placement count:2
Jan 26 09:41:55 compute-0 ceph-mon[74456]: 2.8 deep-scrub starts
Jan 26 09:41:55 compute-0 ceph-mon[74456]: 2.8 deep-scrub ok
Jan 26 09:41:55 compute-0 ceph-mon[74456]: Reconfiguring mgr.compute-0.zllcia (monmap changed)...
Jan 26 09:41:55 compute-0 ceph-mon[74456]: Reconfiguring daemon mgr.compute-0.zllcia on compute-0
Jan 26 09:41:55 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:41:55 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:41:55 compute-0 ceph-mon[74456]: osdmap e41: 3 total, 3 up, 3 in
Jan 26 09:41:55 compute-0 ceph-mon[74456]: 4.7 deep-scrub starts
Jan 26 09:41:55 compute-0 ceph-mon[74456]: 4.7 deep-scrub ok
Jan 26 09:41:55 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:55 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:55 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 09:41:55 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:55 compute-0 ceph-mon[74456]: 3.a scrub starts
Jan 26 09:41:55 compute-0 systemd[1]: libpod-conmon-23df3de3a6112c6a749d3966918bc13b27824d6a9c57f3891a4d33482ec3197a.scope: Deactivated successfully.
Jan 26 09:41:55 compute-0 sudo[88803]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:41:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:41:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:55 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 26 09:41:55 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 26 09:41:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 26 09:41:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 26 09:41:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:55 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:55 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Jan 26 09:41:55 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Jan 26 09:41:55 compute-0 sudo[88926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:41:55 compute-0 sudo[88926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v130: 193 pgs: 1 peering, 124 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:55 compute-0 sudo[88926]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:55 compute-0 python3[88925]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:55 compute-0 sudo[88951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:41:55 compute-0 sudo[88951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:41:55 compute-0 podman[88968]: 2026-01-26 09:41:55.731477065 +0000 UTC m=+0.043440570 container create ff25be7af7bc1ab0b719cd3a55dd9d357d23696510a4ff6a6ec7a09198a4d531 (image=quay.io/ceph/ceph:v19, name=exciting_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 09:41:55 compute-0 systemd[1]: Started libpod-conmon-ff25be7af7bc1ab0b719cd3a55dd9d357d23696510a4ff6a6ec7a09198a4d531.scope.
Jan 26 09:41:55 compute-0 podman[88968]: 2026-01-26 09:41:55.713177401 +0000 UTC m=+0.025140896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c40825377e40ec13401c94d30b4c9d9e84eb43d7a290540113b739574dc805/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c40825377e40ec13401c94d30b4c9d9e84eb43d7a290540113b739574dc805/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c40825377e40ec13401c94d30b4c9d9e84eb43d7a290540113b739574dc805/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:55 compute-0 podman[88968]: 2026-01-26 09:41:55.834293966 +0000 UTC m=+0.146257461 container init ff25be7af7bc1ab0b719cd3a55dd9d357d23696510a4ff6a6ec7a09198a4d531 (image=quay.io/ceph/ceph:v19, name=exciting_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 26 09:41:55 compute-0 podman[88968]: 2026-01-26 09:41:55.839993987 +0000 UTC m=+0.151957482 container start ff25be7af7bc1ab0b719cd3a55dd9d357d23696510a4ff6a6ec7a09198a4d531 (image=quay.io/ceph/ceph:v19, name=exciting_keldysh, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 26 09:41:55 compute-0 podman[88968]: 2026-01-26 09:41:55.857279125 +0000 UTC m=+0.169242660 container attach ff25be7af7bc1ab0b719cd3a55dd9d357d23696510a4ff6a6ec7a09198a4d531 (image=quay.io/ceph/ceph:v19, name=exciting_keldysh, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 09:41:56 compute-0 podman[89030]: 2026-01-26 09:41:56.011309861 +0000 UTC m=+0.035118260 container create b2745b0bd939a30a18094a0c964c1d0fb4e3d441e7cccf71d8d0b0bdda0597c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 26 09:41:56 compute-0 systemd[1]: Started libpod-conmon-b2745b0bd939a30a18094a0c964c1d0fb4e3d441e7cccf71d8d0b0bdda0597c4.scope.
Jan 26 09:41:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:56 compute-0 podman[89030]: 2026-01-26 09:41:56.071412542 +0000 UTC m=+0.095220961 container init b2745b0bd939a30a18094a0c964c1d0fb4e3d441e7cccf71d8d0b0bdda0597c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:41:56 compute-0 podman[89030]: 2026-01-26 09:41:56.077426961 +0000 UTC m=+0.101235360 container start b2745b0bd939a30a18094a0c964c1d0fb4e3d441e7cccf71d8d0b0bdda0597c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_kalam, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:41:56 compute-0 sad_kalam[89046]: 167 167
Jan 26 09:41:56 compute-0 systemd[1]: libpod-b2745b0bd939a30a18094a0c964c1d0fb4e3d441e7cccf71d8d0b0bdda0597c4.scope: Deactivated successfully.
Jan 26 09:41:56 compute-0 podman[89030]: 2026-01-26 09:41:56.081305894 +0000 UTC m=+0.105114323 container attach b2745b0bd939a30a18094a0c964c1d0fb4e3d441e7cccf71d8d0b0bdda0597c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:41:56 compute-0 podman[89030]: 2026-01-26 09:41:56.081581241 +0000 UTC m=+0.105389640 container died b2745b0bd939a30a18094a0c964c1d0fb4e3d441e7cccf71d8d0b0bdda0597c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:56 compute-0 podman[89030]: 2026-01-26 09:41:55.996758806 +0000 UTC m=+0.020567225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:41:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f637983ff0573bcf1b4e20f1813df7e9f598caec4bb68057fa7f8d60fe25d3e4-merged.mount: Deactivated successfully.
Jan 26 09:41:56 compute-0 podman[89030]: 2026-01-26 09:41:56.112989242 +0000 UTC m=+0.136797631 container remove b2745b0bd939a30a18094a0c964c1d0fb4e3d441e7cccf71d8d0b0bdda0597c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 09:41:56 compute-0 systemd[1]: libpod-conmon-b2745b0bd939a30a18094a0c964c1d0fb4e3d441e7cccf71d8d0b0bdda0597c4.scope: Deactivated successfully.
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service node-exporter spec with placement *
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Jan 26 09:41:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 26 09:41:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Jan 26 09:41:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 26 09:41:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Jan 26 09:41:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 26 09:41:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Jan 26 09:41:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 26 09:41:56 compute-0 sudo[88951]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:41:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 exciting_keldysh[88992]: Scheduled node-exporter update...
Jan 26 09:41:56 compute-0 exciting_keldysh[88992]: Scheduled grafana update...
Jan 26 09:41:56 compute-0 exciting_keldysh[88992]: Scheduled prometheus update...
Jan 26 09:41:56 compute-0 exciting_keldysh[88992]: Scheduled alertmanager update...
Jan 26 09:41:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:41:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 systemd[1]: libpod-ff25be7af7bc1ab0b719cd3a55dd9d357d23696510a4ff6a6ec7a09198a4d531.scope: Deactivated successfully.
Jan 26 09:41:56 compute-0 podman[88968]: 2026-01-26 09:41:56.264461991 +0000 UTC m=+0.576425486 container died ff25be7af7bc1ab0b719cd3a55dd9d357d23696510a4ff6a6ec7a09198a4d531 (image=quay.io/ceph/ceph:v19, name=exciting_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 26 09:41:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 26 09:41:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 09:41:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:56 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 26 09:41:56 compute-0 podman[88968]: 2026-01-26 09:41:56.294997578 +0000 UTC m=+0.606961073 container remove ff25be7af7bc1ab0b719cd3a55dd9d357d23696510a4ff6a6ec7a09198a4d531 (image=quay.io/ceph/ceph:v19, name=exciting_keldysh, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 26 09:41:56 compute-0 systemd[1]: libpod-conmon-ff25be7af7bc1ab0b719cd3a55dd9d357d23696510a4ff6a6ec7a09198a4d531.scope: Deactivated successfully.
Jan 26 09:41:56 compute-0 sudo[88919]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-83c40825377e40ec13401c94d30b4c9d9e84eb43d7a290540113b739574dc805-merged.mount: Deactivated successfully.
Jan 26 09:41:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 26 09:41:56 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Jan 26 09:41:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 26 09:41:56 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 26 09:41:56 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Jan 26 09:41:56 compute-0 ceph-mon[74456]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 26 09:41:56 compute-0 ceph-mon[74456]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 26 09:41:56 compute-0 ceph-mon[74456]: 2.1b scrub starts
Jan 26 09:41:56 compute-0 ceph-mon[74456]: 2.1b scrub ok
Jan 26 09:41:56 compute-0 ceph-mon[74456]: 3.a scrub ok
Jan 26 09:41:56 compute-0 ceph-mon[74456]: 4.b deep-scrub starts
Jan 26 09:41:56 compute-0 ceph-mon[74456]: osdmap e42: 3 total, 3 up, 3 in
Jan 26 09:41:56 compute-0 ceph-mon[74456]: 4.b deep-scrub ok
Jan 26 09:41:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 ceph-mon[74456]: Reconfiguring osd.0 (monmap changed)...
Jan 26 09:41:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 26 09:41:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:56 compute-0 ceph-mon[74456]: Reconfiguring daemon osd.0 on compute-0
Jan 26 09:41:56 compute-0 ceph-mon[74456]: pgmap v130: 193 pgs: 1 peering, 124 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 09:41:56 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:56 compute-0 ceph-mon[74456]: 3.1c scrub starts
Jan 26 09:41:56 compute-0 ceph-mon[74456]: 3.1c scrub ok
Jan 26 09:41:56 compute-0 ceph-mon[74456]: osdmap e43: 3 total, 3 up, 3 in
Jan 26 09:41:56 compute-0 sudo[89105]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljanalxabffpukkcxedbeshxkpvjftxp ; /usr/bin/python3'
Jan 26 09:41:56 compute-0 sudo[89105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:56 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 11 completed events
Jan 26 09:41:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:41:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:56 compute-0 python3[89107]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:56 compute-0 podman[89108]: 2026-01-26 09:41:56.807680217 +0000 UTC m=+0.024728565 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:41:57 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:41:57 compute-0 podman[89108]: 2026-01-26 09:41:57.28012319 +0000 UTC m=+0.497171548 container create 9a222dd1be8c7f2274f079f6c35bba5ce953697d0a0da0d9c8a72c0036a74bdf (image=quay.io/ceph/ceph:v19, name=strange_cerf, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 09:41:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:41:57 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:57 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 26 09:41:57 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 26 09:41:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 26 09:41:57 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 26 09:41:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:57 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:57 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Jan 26 09:41:57 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Jan 26 09:41:57 compute-0 systemd[1]: Started libpod-conmon-9a222dd1be8c7f2274f079f6c35bba5ce953697d0a0da0d9c8a72c0036a74bdf.scope.
Jan 26 09:41:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1030d2972d4a1c20c90fdbf0b0e3db08691067249885dcb40f491f7108a5ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1030d2972d4a1c20c90fdbf0b0e3db08691067249885dcb40f491f7108a5ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1030d2972d4a1c20c90fdbf0b0e3db08691067249885dcb40f491f7108a5ff/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:57 compute-0 podman[89108]: 2026-01-26 09:41:57.497129844 +0000 UTC m=+0.714178182 container init 9a222dd1be8c7f2274f079f6c35bba5ce953697d0a0da0d9c8a72c0036a74bdf (image=quay.io/ceph/ceph:v19, name=strange_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:57 compute-0 podman[89108]: 2026-01-26 09:41:57.503921264 +0000 UTC m=+0.720969602 container start 9a222dd1be8c7f2274f079f6c35bba5ce953697d0a0da0d9c8a72c0036a74bdf (image=quay.io/ceph/ceph:v19, name=strange_cerf, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Jan 26 09:41:57 compute-0 podman[89108]: 2026-01-26 09:41:57.507241592 +0000 UTC m=+0.724290030 container attach 9a222dd1be8c7f2274f079f6c35bba5ce953697d0a0da0d9c8a72c0036a74bdf (image=quay.io/ceph/ceph:v19, name=strange_cerf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:41:57 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 26 09:41:57 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 26 09:41:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v132: 193 pgs: 1 peering, 93 unknown, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Jan 26 09:41:57 compute-0 ceph-mon[74456]: 2.1 deep-scrub starts
Jan 26 09:41:57 compute-0 ceph-mon[74456]: 2.1 deep-scrub ok
Jan 26 09:41:57 compute-0 ceph-mon[74456]: from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:41:57 compute-0 ceph-mon[74456]: Saving service node-exporter spec with placement *
Jan 26 09:41:57 compute-0 ceph-mon[74456]: Saving service grafana spec with placement compute-0;count:1
Jan 26 09:41:57 compute-0 ceph-mon[74456]: Saving service prometheus spec with placement compute-0;count:1
Jan 26 09:41:57 compute-0 ceph-mon[74456]: Saving service alertmanager spec with placement compute-0;count:1
Jan 26 09:41:57 compute-0 ceph-mon[74456]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 26 09:41:57 compute-0 ceph-mon[74456]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 26 09:41:57 compute-0 ceph-mon[74456]: 4.6 deep-scrub starts
Jan 26 09:41:57 compute-0 ceph-mon[74456]: 4.6 deep-scrub ok
Jan 26 09:41:57 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:57 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:57 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:57 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 26 09:41:57 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/468906199' entity='client.admin' 
Jan 26 09:41:58 compute-0 systemd[1]: libpod-9a222dd1be8c7f2274f079f6c35bba5ce953697d0a0da0d9c8a72c0036a74bdf.scope: Deactivated successfully.
Jan 26 09:41:58 compute-0 podman[89148]: 2026-01-26 09:41:58.085022223 +0000 UTC m=+0.031123935 container died 9a222dd1be8c7f2274f079f6c35bba5ce953697d0a0da0d9c8a72c0036a74bdf (image=quay.io/ceph/ceph:v19, name=strange_cerf, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:41:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a1030d2972d4a1c20c90fdbf0b0e3db08691067249885dcb40f491f7108a5ff-merged.mount: Deactivated successfully.
Jan 26 09:41:58 compute-0 podman[89148]: 2026-01-26 09:41:58.120600814 +0000 UTC m=+0.066702506 container remove 9a222dd1be8c7f2274f079f6c35bba5ce953697d0a0da0d9c8a72c0036a74bdf (image=quay.io/ceph/ceph:v19, name=strange_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 09:41:58 compute-0 systemd[1]: libpod-conmon-9a222dd1be8c7f2274f079f6c35bba5ce953697d0a0da0d9c8a72c0036a74bdf.scope: Deactivated successfully.
Jan 26 09:41:58 compute-0 sudo[89105]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:58 compute-0 sudo[89187]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udefukbzopdzgmluglppzkuklcsloyxp ; /usr/bin/python3'
Jan 26 09:41:58 compute-0 sudo[89187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:41:58 compute-0 python3[89189]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:58 compute-0 podman[89190]: 2026-01-26 09:41:58.472057475 +0000 UTC m=+0.065263087 container create ade52141a7061493838d9e72cf0dd4ec9be47a658e7baf13093dfbc90b5798d8 (image=quay.io/ceph/ceph:v19, name=admiring_panini, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:41:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:41:58 compute-0 podman[89190]: 2026-01-26 09:41:58.431365398 +0000 UTC m=+0.024571060 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:58 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 26 09:41:58 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 26 09:41:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 26 09:41:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 09:41:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 26 09:41:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 09:41:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:41:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:58 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 26 09:41:58 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 26 09:41:58 compute-0 systemd[1]: Started libpod-conmon-ade52141a7061493838d9e72cf0dd4ec9be47a658e7baf13093dfbc90b5798d8.scope.
Jan 26 09:41:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baac76c8fe20a2531008f599088c22bc74c11d39640c2d404b2595be36c7cd8e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baac76c8fe20a2531008f599088c22bc74c11d39640c2d404b2595be36c7cd8e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baac76c8fe20a2531008f599088c22bc74c11d39640c2d404b2595be36c7cd8e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:41:58 compute-0 podman[89190]: 2026-01-26 09:41:58.565310903 +0000 UTC m=+0.158516495 container init ade52141a7061493838d9e72cf0dd4ec9be47a658e7baf13093dfbc90b5798d8 (image=quay.io/ceph/ceph:v19, name=admiring_panini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:41:58 compute-0 podman[89190]: 2026-01-26 09:41:58.572673318 +0000 UTC m=+0.165878910 container start ade52141a7061493838d9e72cf0dd4ec9be47a658e7baf13093dfbc90b5798d8 (image=quay.io/ceph/ceph:v19, name=admiring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:41:58 compute-0 podman[89190]: 2026-01-26 09:41:58.577392063 +0000 UTC m=+0.170597665 container attach ade52141a7061493838d9e72cf0dd4ec9be47a658e7baf13093dfbc90b5798d8 (image=quay.io/ceph/ceph:v19, name=admiring_panini, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 09:41:58 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 26 09:41:58 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 26 09:41:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Jan 26 09:41:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4184262969' entity='client.admin' 
Jan 26 09:41:58 compute-0 ceph-mon[74456]: 2.7 scrub starts
Jan 26 09:41:58 compute-0 ceph-mon[74456]: 2.7 scrub ok
Jan 26 09:41:58 compute-0 ceph-mon[74456]: 3.9 scrub starts
Jan 26 09:41:58 compute-0 ceph-mon[74456]: Reconfiguring osd.1 (monmap changed)...
Jan 26 09:41:58 compute-0 ceph-mon[74456]: 3.9 scrub ok
Jan 26 09:41:58 compute-0 ceph-mon[74456]: Reconfiguring daemon osd.1 on compute-1
Jan 26 09:41:58 compute-0 ceph-mon[74456]: 4.a scrub starts
Jan 26 09:41:58 compute-0 ceph-mon[74456]: 4.a scrub ok
Jan 26 09:41:58 compute-0 ceph-mon[74456]: pgmap v132: 193 pgs: 1 peering, 93 unknown, 99 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/468906199' entity='client.admin' 
Jan 26 09:41:58 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:58 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:58 compute-0 ceph-mon[74456]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 26 09:41:58 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 09:41:58 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 09:41:58 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:41:58 compute-0 ceph-mon[74456]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 26 09:41:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4184262969' entity='client.admin' 
Jan 26 09:41:58 compute-0 systemd[1]: libpod-ade52141a7061493838d9e72cf0dd4ec9be47a658e7baf13093dfbc90b5798d8.scope: Deactivated successfully.
Jan 26 09:41:58 compute-0 podman[89230]: 2026-01-26 09:41:58.982617247 +0000 UTC m=+0.021359146 container died ade52141a7061493838d9e72cf0dd4ec9be47a658e7baf13093dfbc90b5798d8 (image=quay.io/ceph/ceph:v19, name=admiring_panini, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:41:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-baac76c8fe20a2531008f599088c22bc74c11d39640c2d404b2595be36c7cd8e-merged.mount: Deactivated successfully.
Jan 26 09:41:59 compute-0 podman[89230]: 2026-01-26 09:41:59.01372148 +0000 UTC m=+0.052463359 container remove ade52141a7061493838d9e72cf0dd4ec9be47a658e7baf13093dfbc90b5798d8 (image=quay.io/ceph/ceph:v19, name=admiring_panini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:41:59 compute-0 systemd[1]: libpod-conmon-ade52141a7061493838d9e72cf0dd4ec9be47a658e7baf13093dfbc90b5798d8.scope: Deactivated successfully.
Jan 26 09:41:59 compute-0 sudo[89187]: pam_unix(sudo:session): session closed for user root
Jan 26 09:41:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:41:59 compute-0 sudo[89269]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqzeasknxgxaveurgthhfwbobevjqexf ; /usr/bin/python3'
Jan 26 09:41:59 compute-0 sudo[89269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:41:59 compute-0 python3[89271]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:41:59 compute-0 podman[89272]: 2026-01-26 09:41:59.350864113 +0000 UTC m=+0.022884376 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:41:59 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:41:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:41:59 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 26 09:41:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v133: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:41:59 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 26 09:41:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 26 09:41:59 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 26 09:41:59 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 26 09:41:59 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 26 09:41:59 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 26 09:41:59 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:41:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 26 09:41:59 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:42:00 compute-0 podman[89272]: 2026-01-26 09:42:00.193888938 +0000 UTC m=+0.865909181 container create 231247f215f73d78552fdacd253c6df4b60a60ae52fdb66672da8f2ab1b68dce (image=quay.io/ceph/ceph:v19, name=upbeat_raman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:00 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Jan 26 09:42:00 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Jan 26 09:42:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 26 09:42:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 09:42:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 26 09:42:00 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 09:42:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:42:00 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:00 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 26 09:42:00 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 26 09:42:00 compute-0 ceph-mon[74456]: 2.4 scrub starts
Jan 26 09:42:00 compute-0 ceph-mon[74456]: 2.4 scrub ok
Jan 26 09:42:00 compute-0 ceph-mon[74456]: 3.4 scrub starts
Jan 26 09:42:00 compute-0 ceph-mon[74456]: 3.4 scrub ok
Jan 26 09:42:00 compute-0 ceph-mon[74456]: 4.1d scrub starts
Jan 26 09:42:00 compute-0 ceph-mon[74456]: 4.1d scrub ok
Jan 26 09:42:00 compute-0 ceph-mon[74456]: 2.1c scrub starts
Jan 26 09:42:00 compute-0 ceph-mon[74456]: 2.1c scrub ok
Jan 26 09:42:00 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:00 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:42:00 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:42:00 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:42:00 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:42:00 compute-0 systemd[1]: Started libpod-conmon-231247f215f73d78552fdacd253c6df4b60a60ae52fdb66672da8f2ab1b68dce.scope.
Jan 26 09:42:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b39823662d48f7660ac755ac6fc3009c0b46f7c6fe1ae998f4e2a1d9970a7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b39823662d48f7660ac755ac6fc3009c0b46f7c6fe1ae998f4e2a1d9970a7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b39823662d48f7660ac755ac6fc3009c0b46f7c6fe1ae998f4e2a1d9970a7b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:00 compute-0 podman[89272]: 2026-01-26 09:42:00.648334648 +0000 UTC m=+1.320354961 container init 231247f215f73d78552fdacd253c6df4b60a60ae52fdb66672da8f2ab1b68dce (image=quay.io/ceph/ceph:v19, name=upbeat_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 26 09:42:00 compute-0 podman[89272]: 2026-01-26 09:42:00.657540899 +0000 UTC m=+1.329561142 container start 231247f215f73d78552fdacd253c6df4b60a60ae52fdb66672da8f2ab1b68dce (image=quay.io/ceph/ceph:v19, name=upbeat_raman, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 26 09:42:00 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 26 09:42:00 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 26 09:42:00 compute-0 podman[89272]: 2026-01-26 09:42:00.743646337 +0000 UTC m=+1.415666550 container attach 231247f215f73d78552fdacd253c6df4b60a60ae52fdb66672da8f2ab1b68dce (image=quay.io/ceph/ceph:v19, name=upbeat_raman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:42:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:42:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:42:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:42:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:42:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:42:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 26 09:42:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[3.19( empty local-lis/les=0/0 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.1e( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[2.e( empty local-lis/les=0/0 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.b( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[5.6( empty local-lis/les=0/0 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[2.1( empty local-lis/les=0/0 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[5.a( empty local-lis/les=0/0 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.8( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[3.2( empty local-lis/les=0/0 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[3.4( empty local-lis/les=0/0 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.2( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[2.4( empty local-lis/les=0/0 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[2.6( empty local-lis/les=0/0 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[3.b( empty local-lis/les=0/0 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.e( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.10( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[5.17( empty local-lis/les=0/0 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[2.1e( empty local-lis/les=0/0 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.667634010s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170341492s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.1a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.667607307s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170341492s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.585196495s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.087966919s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.585161209s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.087966919s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.667820930s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170707703s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.15( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.667800903s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170707703s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.584911346s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.087898254s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.584876060s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.087898254s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.667539597s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170600891s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.17( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.667525291s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170600891s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.584676743s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.087837219s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.584671974s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.087882996s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.584641457s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.087837219s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.584658623s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.087882996s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.667206764s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170669556s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.12( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.667189598s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170669556s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.667144775s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170684814s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.584216118s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.087783813s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.584207535s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.087783813s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.667121887s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170684814s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.584055901s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.087738037s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.584040642s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.087738037s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.584246635s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.088035583s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.584231377s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.088035583s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666954994s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170776367s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666943550s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170776367s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666790962s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170753479s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666779518s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170753479s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666647911s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170791626s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.583848953s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.087997437s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666636467s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170791626s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666568756s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170768738s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.583818436s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.087997437s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666550636s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170768738s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.583178520s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.087501526s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666466713s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170845032s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.583165169s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.087501526s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.1b( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666452408s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170845032s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.583254814s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.087692261s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.583238602s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.087692261s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.578103065s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.082695007s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.578083038s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.082695007s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.582915306s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.087577820s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.578037262s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.082702637s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.578019142s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.082702637s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.582899094s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.087577820s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666169167s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170989990s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666143417s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170974731s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666127205s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170974731s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.666150093s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170989990s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.582602501s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.087554932s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.665920258s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170890808s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.582589149s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.087554932s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.19( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.665905952s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170890808s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.577515602s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.082588196s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.577500343s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.082588196s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.665670395s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170898438s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.665656090s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170898438s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.577111244s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.082435608s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.665587425s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170913696s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.665555000s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170936584s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.665546417s) [1] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170913696s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.1e( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.665542603s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170936584s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.577088356s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.082435608s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.581949234s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.087448120s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.581932068s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.087448120s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.577031136s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.082656860s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.577018738s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.082656860s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.665217400s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active pruub 109.170944214s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.576517105s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.082260132s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.576504707s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.082260132s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[6.1c( empty local-lis/les=41/42 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=44 pruub=10.665199280s) [2] r=-1 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 109.170944214s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.576501846s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.082618713s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.576487541s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.082618713s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.581339836s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active pruub 107.087715149s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:42:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=39/40 n=0 ec=39/16 lis/c=39/39 les/c/f=40/40/0 sis=44 pruub=8.581319809s) [2] r=-1 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.087715149s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:42:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Jan 26 09:42:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:42:01 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2724000136' entity='client.admin' 
Jan 26 09:42:01 compute-0 systemd[1]: libpod-231247f215f73d78552fdacd253c6df4b60a60ae52fdb66672da8f2ab1b68dce.scope: Deactivated successfully.
Jan 26 09:42:01 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:01 compute-0 podman[89272]: 2026-01-26 09:42:01.132512819 +0000 UTC m=+1.804533042 container died 231247f215f73d78552fdacd253c6df4b60a60ae52fdb66672da8f2ab1b68dce (image=quay.io/ceph/ceph:v19, name=upbeat_raman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 26 09:42:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:42:01 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8b39823662d48f7660ac755ac6fc3009c0b46f7c6fe1ae998f4e2a1d9970a7b-merged.mount: Deactivated successfully.
Jan 26 09:42:01 compute-0 podman[89272]: 2026-01-26 09:42:01.222600205 +0000 UTC m=+1.894620428 container remove 231247f215f73d78552fdacd253c6df4b60a60ae52fdb66672da8f2ab1b68dce (image=quay.io/ceph/ceph:v19, name=upbeat_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 26 09:42:01 compute-0 systemd[1]: libpod-conmon-231247f215f73d78552fdacd253c6df4b60a60ae52fdb66672da8f2ab1b68dce.scope: Deactivated successfully.
Jan 26 09:42:01 compute-0 sudo[89269]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:01 compute-0 ceph-mon[74456]: 3.8 deep-scrub starts
Jan 26 09:42:01 compute-0 ceph-mon[74456]: 3.8 deep-scrub ok
Jan 26 09:42:01 compute-0 ceph-mon[74456]: 4.1a scrub starts
Jan 26 09:42:01 compute-0 ceph-mon[74456]: pgmap v133: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:01 compute-0 ceph-mon[74456]: 4.1a scrub ok
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:42:01 compute-0 ceph-mon[74456]: 2.0 scrub starts
Jan 26 09:42:01 compute-0 ceph-mon[74456]: 2.0 scrub ok
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:01 compute-0 ceph-mon[74456]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:01 compute-0 ceph-mon[74456]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:42:01 compute-0 ceph-mon[74456]: osdmap e44: 3 total, 3 up, 3 in
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2724000136' entity='client.admin' 
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:01 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:01 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 26 09:42:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v135: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:01 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 26 09:42:01 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 65eed092-d585-4df6-a165-b953c53ed435 (Global Recovery Event) in 10 seconds
Jan 26 09:42:01 compute-0 sudo[89350]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbalyvrwmhobtmsqvzrrntdqzbwbvxod ; /usr/bin/python3'
Jan 26 09:42:01 compute-0 sudo[89350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 26 09:42:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 26 09:42:01 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[2.1e( empty local-lis/les=44/45 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[5.17( empty local-lis/les=44/45 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.10( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[3.b( empty local-lis/les=44/45 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[3.4( empty local-lis/les=44/45 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[3.2( empty local-lis/les=44/45 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[5.a( empty local-lis/les=44/45 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[2.1( empty local-lis/les=44/45 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[5.6( empty local-lis/les=44/45 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[2.e( empty local-lis/les=44/45 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.b( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[7.1e( empty local-lis/les=44/45 n=0 ec=41/24 lis/c=41/41 les/c/f=43/43/0 sis=44) [0] r=0 lpr=44 pi=[41,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[3.19( empty local-lis/les=44/45 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=36/13 lis/c=36/36 les/c/f=37/37/0 sis=44) [0] r=0 lpr=44 pi=[36,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=37/14 lis/c=37/37 les/c/f=38/38/0 sis=44) [0] r=0 lpr=44 pi=[37,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=39/18 lis/c=39/39 les/c/f=40/40/0 sis=44) [0] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:01 compute-0 python3[89352]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:01 compute-0 sudo[89350]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:02 compute-0 sshd-session[89325]: Invalid user admin from 157.245.76.178 port 43866
Jan 26 09:42:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:42:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:42:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:02 compute-0 sshd-session[89325]: Connection closed by invalid user admin 157.245.76.178 port 43866 [preauth]
Jan 26 09:42:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:42:02 compute-0 sudo[89389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fisgbxiomkyjawlrzjweyghrpcclhsry ; /usr/bin/python3'
Jan 26 09:42:02 compute-0 sudo[89389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:42:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:42:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:42:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:42:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:42:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:42:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:42:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:42:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:42:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:42:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:42:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:02 compute-0 sudo[89392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:02 compute-0 sudo[89392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:02 compute-0 sudo[89392]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:02 compute-0 python3[89391]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.zllcia/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:02 compute-0 ceph-mon[74456]: 3.5 scrub starts
Jan 26 09:42:02 compute-0 ceph-mon[74456]: 3.5 scrub ok
Jan 26 09:42:02 compute-0 ceph-mon[74456]: 4.5 scrub starts
Jan 26 09:42:02 compute-0 ceph-mon[74456]: 4.5 scrub ok
Jan 26 09:42:02 compute-0 ceph-mon[74456]: 2.1a scrub starts
Jan 26 09:42:02 compute-0 ceph-mon[74456]: 2.1a scrub ok
Jan 26 09:42:02 compute-0 ceph-mon[74456]: 3.1b scrub starts
Jan 26 09:42:02 compute-0 ceph-mon[74456]: 3.1b scrub ok
Jan 26 09:42:02 compute-0 ceph-mon[74456]: 4.17 scrub starts
Jan 26 09:42:02 compute-0 ceph-mon[74456]: pgmap v135: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:02 compute-0 ceph-mon[74456]: 4.17 scrub ok
Jan 26 09:42:02 compute-0 ceph-mon[74456]: osdmap e45: 3 total, 3 up, 3 in
Jan 26 09:42:02 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:02 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:02 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:02 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:02 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:02 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:42:02 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:02 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:42:02 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:42:02 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:02 compute-0 sudo[89417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:42:02 compute-0 sudo[89417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:02 compute-0 podman[89440]: 2026-01-26 09:42:02.568966833 +0000 UTC m=+0.061207229 container create 802fffc71f74efaec7184572a34335e46bd709d29a079075ff77d6570d85d8b3 (image=quay.io/ceph/ceph:v19, name=nifty_keldysh, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:02 compute-0 systemd[1]: Started libpod-conmon-802fffc71f74efaec7184572a34335e46bd709d29a079075ff77d6570d85d8b3.scope.
Jan 26 09:42:02 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 26 09:42:02 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 26 09:42:02 compute-0 podman[89440]: 2026-01-26 09:42:02.550713605 +0000 UTC m=+0.042954031 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f919352dd2151274b2255353ccadfe234a8730bada65b880f6e3fe59f9157773/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f919352dd2151274b2255353ccadfe234a8730bada65b880f6e3fe59f9157773/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f919352dd2151274b2255353ccadfe234a8730bada65b880f6e3fe59f9157773/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:02 compute-0 podman[89440]: 2026-01-26 09:42:02.664760115 +0000 UTC m=+0.157000521 container init 802fffc71f74efaec7184572a34335e46bd709d29a079075ff77d6570d85d8b3 (image=quay.io/ceph/ceph:v19, name=nifty_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:02 compute-0 podman[89440]: 2026-01-26 09:42:02.672577018 +0000 UTC m=+0.164817404 container start 802fffc71f74efaec7184572a34335e46bd709d29a079075ff77d6570d85d8b3 (image=quay.io/ceph/ceph:v19, name=nifty_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:02 compute-0 podman[89440]: 2026-01-26 09:42:02.676671189 +0000 UTC m=+0.168911575 container attach 802fffc71f74efaec7184572a34335e46bd709d29a079075ff77d6570d85d8b3 (image=quay.io/ceph/ceph:v19, name=nifty_keldysh, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:02 compute-0 podman[89520]: 2026-01-26 09:42:02.851580138 +0000 UTC m=+0.040385952 container create 40ce2edbc0ad5fcf985f5edfd98ae57760f2d98a115541bea5579072fe710be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:02 compute-0 systemd[1]: Started libpod-conmon-40ce2edbc0ad5fcf985f5edfd98ae57760f2d98a115541bea5579072fe710be9.scope.
Jan 26 09:42:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:02 compute-0 podman[89520]: 2026-01-26 09:42:02.830348619 +0000 UTC m=+0.019154433 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:02 compute-0 podman[89520]: 2026-01-26 09:42:02.92867308 +0000 UTC m=+0.117478974 container init 40ce2edbc0ad5fcf985f5edfd98ae57760f2d98a115541bea5579072fe710be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_carson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:02 compute-0 podman[89520]: 2026-01-26 09:42:02.938359744 +0000 UTC m=+0.127165548 container start 40ce2edbc0ad5fcf985f5edfd98ae57760f2d98a115541bea5579072fe710be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_carson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 09:42:02 compute-0 charming_carson[89536]: 167 167
Jan 26 09:42:02 compute-0 systemd[1]: libpod-40ce2edbc0ad5fcf985f5edfd98ae57760f2d98a115541bea5579072fe710be9.scope: Deactivated successfully.
Jan 26 09:42:02 compute-0 podman[89520]: 2026-01-26 09:42:02.942809975 +0000 UTC m=+0.131615819 container attach 40ce2edbc0ad5fcf985f5edfd98ae57760f2d98a115541bea5579072fe710be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:02 compute-0 conmon[89536]: conmon 40ce2edbc0ad5fcf985f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40ce2edbc0ad5fcf985f5edfd98ae57760f2d98a115541bea5579072fe710be9.scope/container/memory.events
Jan 26 09:42:02 compute-0 podman[89520]: 2026-01-26 09:42:02.94406571 +0000 UTC m=+0.132871524 container died 40ce2edbc0ad5fcf985f5edfd98ae57760f2d98a115541bea5579072fe710be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_carson, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 09:42:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a772e2a12140d888d76b82bd8b5ef95c64f3bb7087b24f30cbe6cc2ee5a03098-merged.mount: Deactivated successfully.
Jan 26 09:42:02 compute-0 podman[89520]: 2026-01-26 09:42:02.979480245 +0000 UTC m=+0.168286039 container remove 40ce2edbc0ad5fcf985f5edfd98ae57760f2d98a115541bea5579072fe710be9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:02 compute-0 systemd[1]: libpod-conmon-40ce2edbc0ad5fcf985f5edfd98ae57760f2d98a115541bea5579072fe710be9.scope: Deactivated successfully.
Jan 26 09:42:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.zllcia/server_addr}] v 0)
Jan 26 09:42:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/308369494' entity='client.admin' 
Jan 26 09:42:03 compute-0 systemd[1]: libpod-802fffc71f74efaec7184572a34335e46bd709d29a079075ff77d6570d85d8b3.scope: Deactivated successfully.
Jan 26 09:42:03 compute-0 conmon[89457]: conmon 802fffc71f74efaec718 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-802fffc71f74efaec7184572a34335e46bd709d29a079075ff77d6570d85d8b3.scope/container/memory.events
Jan 26 09:42:03 compute-0 podman[89440]: 2026-01-26 09:42:03.038881015 +0000 UTC m=+0.531121411 container died 802fffc71f74efaec7184572a34335e46bd709d29a079075ff77d6570d85d8b3 (image=quay.io/ceph/ceph:v19, name=nifty_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f919352dd2151274b2255353ccadfe234a8730bada65b880f6e3fe59f9157773-merged.mount: Deactivated successfully.
Jan 26 09:42:03 compute-0 podman[89440]: 2026-01-26 09:42:03.07538342 +0000 UTC m=+0.567623826 container remove 802fffc71f74efaec7184572a34335e46bd709d29a079075ff77d6570d85d8b3 (image=quay.io/ceph/ceph:v19, name=nifty_keldysh, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:03 compute-0 systemd[1]: libpod-conmon-802fffc71f74efaec7184572a34335e46bd709d29a079075ff77d6570d85d8b3.scope: Deactivated successfully.
Jan 26 09:42:03 compute-0 sudo[89389]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:03 compute-0 podman[89570]: 2026-01-26 09:42:03.156603315 +0000 UTC m=+0.043449836 container create 4953dec228ed7ba488ed2041433350aae5552981887d70ce04cf568500d1199b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 09:42:03 compute-0 systemd[1]: Started libpod-conmon-4953dec228ed7ba488ed2041433350aae5552981887d70ce04cf568500d1199b.scope.
Jan 26 09:42:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef85ec650475d9266ff6288f83fadceb265e3010610cf27afd88c4a64fbaac5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef85ec650475d9266ff6288f83fadceb265e3010610cf27afd88c4a64fbaac5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef85ec650475d9266ff6288f83fadceb265e3010610cf27afd88c4a64fbaac5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef85ec650475d9266ff6288f83fadceb265e3010610cf27afd88c4a64fbaac5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef85ec650475d9266ff6288f83fadceb265e3010610cf27afd88c4a64fbaac5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:03 compute-0 podman[89570]: 2026-01-26 09:42:03.226366377 +0000 UTC m=+0.113212918 container init 4953dec228ed7ba488ed2041433350aae5552981887d70ce04cf568500d1199b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 26 09:42:03 compute-0 podman[89570]: 2026-01-26 09:42:03.232069992 +0000 UTC m=+0.118916513 container start 4953dec228ed7ba488ed2041433350aae5552981887d70ce04cf568500d1199b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:42:03 compute-0 podman[89570]: 2026-01-26 09:42:03.140229058 +0000 UTC m=+0.027075599 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:03 compute-0 podman[89570]: 2026-01-26 09:42:03.23675919 +0000 UTC m=+0.123605711 container attach 4953dec228ed7ba488ed2041433350aae5552981887d70ce04cf568500d1199b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 09:42:03 compute-0 ceph-mon[74456]: 7.1c scrub starts
Jan 26 09:42:03 compute-0 ceph-mon[74456]: 7.1c scrub ok
Jan 26 09:42:03 compute-0 ceph-mon[74456]: 3.1a scrub starts
Jan 26 09:42:03 compute-0 ceph-mon[74456]: 3.1a scrub ok
Jan 26 09:42:03 compute-0 ceph-mon[74456]: 4.16 scrub starts
Jan 26 09:42:03 compute-0 ceph-mon[74456]: 4.16 scrub ok
Jan 26 09:42:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/308369494' entity='client.admin' 
Jan 26 09:42:03 compute-0 quizzical_mccarthy[89586]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:42:03 compute-0 quizzical_mccarthy[89586]: --> All data devices are unavailable
Jan 26 09:42:03 compute-0 systemd[1]: libpod-4953dec228ed7ba488ed2041433350aae5552981887d70ce04cf568500d1199b.scope: Deactivated successfully.
Jan 26 09:42:03 compute-0 podman[89570]: 2026-01-26 09:42:03.546460753 +0000 UTC m=+0.433307274 container died 4953dec228ed7ba488ed2041433350aae5552981887d70ce04cf568500d1199b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_mccarthy, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 26 09:42:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-bef85ec650475d9266ff6288f83fadceb265e3010610cf27afd88c4a64fbaac5-merged.mount: Deactivated successfully.
Jan 26 09:42:03 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Jan 26 09:42:03 compute-0 podman[89570]: 2026-01-26 09:42:03.582376533 +0000 UTC m=+0.469223054 container remove 4953dec228ed7ba488ed2041433350aae5552981887d70ce04cf568500d1199b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 09:42:03 compute-0 systemd[1]: libpod-conmon-4953dec228ed7ba488ed2041433350aae5552981887d70ce04cf568500d1199b.scope: Deactivated successfully.
Jan 26 09:42:03 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Jan 26 09:42:03 compute-0 sudo[89417]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v137: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:03 compute-0 sudo[89613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:03 compute-0 sudo[89613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:03 compute-0 sudo[89613]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:03 compute-0 sudo[89638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:42:03 compute-0 sudo[89638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:03 compute-0 sudo[89686]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppcvqxrnnxknswrkbqrqdkoalnsgqbij ; /usr/bin/python3'
Jan 26 09:42:03 compute-0 sudo[89686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:03 compute-0 python3[89688]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.xammti/server_addr 192.168.122.101 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:03 compute-0 podman[89713]: 2026-01-26 09:42:03.994083308 +0000 UTC m=+0.039564320 container create ed3c1edb6300d77d8f778cef9afd2ed40dc731e55c35fa17f3e6e54687ed8d60 (image=quay.io/ceph/ceph:v19, name=mystifying_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 09:42:04 compute-0 systemd[1]: Started libpod-conmon-ed3c1edb6300d77d8f778cef9afd2ed40dc731e55c35fa17f3e6e54687ed8d60.scope.
Jan 26 09:42:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:04 compute-0 podman[89713]: 2026-01-26 09:42:03.975616864 +0000 UTC m=+0.021097906 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab3649987f3b3f0a830b3205afccbdc5517d2f9882c7d5ef51fabb5787ba11f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab3649987f3b3f0a830b3205afccbdc5517d2f9882c7d5ef51fabb5787ba11f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab3649987f3b3f0a830b3205afccbdc5517d2f9882c7d5ef51fabb5787ba11f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:04 compute-0 podman[89713]: 2026-01-26 09:42:04.088351368 +0000 UTC m=+0.133832390 container init ed3c1edb6300d77d8f778cef9afd2ed40dc731e55c35fa17f3e6e54687ed8d60 (image=quay.io/ceph/ceph:v19, name=mystifying_mcnulty, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 09:42:04 compute-0 podman[89713]: 2026-01-26 09:42:04.103399108 +0000 UTC m=+0.148880140 container start ed3c1edb6300d77d8f778cef9afd2ed40dc731e55c35fa17f3e6e54687ed8d60 (image=quay.io/ceph/ceph:v19, name=mystifying_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:04 compute-0 podman[89713]: 2026-01-26 09:42:04.117553534 +0000 UTC m=+0.163034566 container attach ed3c1edb6300d77d8f778cef9afd2ed40dc731e55c35fa17f3e6e54687ed8d60 (image=quay.io/ceph/ceph:v19, name=mystifying_mcnulty, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:04 compute-0 podman[89747]: 2026-01-26 09:42:04.176345807 +0000 UTC m=+0.034343607 container create 17e53d9bda98c1b6db7138857e49ff417533fd046bd64ba9c6d447c0849288e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 26 09:42:04 compute-0 systemd[1]: Started libpod-conmon-17e53d9bda98c1b6db7138857e49ff417533fd046bd64ba9c6d447c0849288e7.scope.
Jan 26 09:42:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:04 compute-0 podman[89747]: 2026-01-26 09:42:04.236042015 +0000 UTC m=+0.094039865 container init 17e53d9bda98c1b6db7138857e49ff417533fd046bd64ba9c6d447c0849288e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:04 compute-0 podman[89747]: 2026-01-26 09:42:04.241391571 +0000 UTC m=+0.099389371 container start 17e53d9bda98c1b6db7138857e49ff417533fd046bd64ba9c6d447c0849288e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:04 compute-0 peaceful_aryabhata[89773]: 167 167
Jan 26 09:42:04 compute-0 systemd[1]: libpod-17e53d9bda98c1b6db7138857e49ff417533fd046bd64ba9c6d447c0849288e7.scope: Deactivated successfully.
Jan 26 09:42:04 compute-0 podman[89747]: 2026-01-26 09:42:04.245028119 +0000 UTC m=+0.103025969 container attach 17e53d9bda98c1b6db7138857e49ff417533fd046bd64ba9c6d447c0849288e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_aryabhata, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:42:04 compute-0 podman[89747]: 2026-01-26 09:42:04.245312238 +0000 UTC m=+0.103310068 container died 17e53d9bda98c1b6db7138857e49ff417533fd046bd64ba9c6d447c0849288e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:04 compute-0 podman[89747]: 2026-01-26 09:42:04.161815741 +0000 UTC m=+0.019813561 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a770e58eef62c986ba0c097d0389c930edf0c3e02008c1e3523f22e7dbdb44cd-merged.mount: Deactivated successfully.
Jan 26 09:42:04 compute-0 podman[89747]: 2026-01-26 09:42:04.281236557 +0000 UTC m=+0.139234357 container remove 17e53d9bda98c1b6db7138857e49ff417533fd046bd64ba9c6d447c0849288e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 09:42:04 compute-0 systemd[1]: libpod-conmon-17e53d9bda98c1b6db7138857e49ff417533fd046bd64ba9c6d447c0849288e7.scope: Deactivated successfully.
Jan 26 09:42:04 compute-0 podman[89806]: 2026-01-26 09:42:04.432901452 +0000 UTC m=+0.044723980 container create 392aec05e1f6e0b39c132e3b2cbf65eda56fb0c6deb959087ce573a251549173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_turing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 09:42:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.xammti/server_addr}] v 0)
Jan 26 09:42:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1533391079' entity='client.admin' 
Jan 26 09:42:04 compute-0 systemd[1]: Started libpod-conmon-392aec05e1f6e0b39c132e3b2cbf65eda56fb0c6deb959087ce573a251549173.scope.
Jan 26 09:42:04 compute-0 systemd[1]: libpod-ed3c1edb6300d77d8f778cef9afd2ed40dc731e55c35fa17f3e6e54687ed8d60.scope: Deactivated successfully.
Jan 26 09:42:04 compute-0 podman[89713]: 2026-01-26 09:42:04.468165984 +0000 UTC m=+0.513646996 container died ed3c1edb6300d77d8f778cef9afd2ed40dc731e55c35fa17f3e6e54687ed8d60 (image=quay.io/ceph/ceph:v19, name=mystifying_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 26 09:42:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1140981a3b1b66e9f1200b2d9dfa4a056be9f8691d81601466ebf48ba19e29ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1140981a3b1b66e9f1200b2d9dfa4a056be9f8691d81601466ebf48ba19e29ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1140981a3b1b66e9f1200b2d9dfa4a056be9f8691d81601466ebf48ba19e29ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1140981a3b1b66e9f1200b2d9dfa4a056be9f8691d81601466ebf48ba19e29ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:04 compute-0 podman[89806]: 2026-01-26 09:42:04.408010854 +0000 UTC m=+0.019833392 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:04 compute-0 podman[89806]: 2026-01-26 09:42:04.509630104 +0000 UTC m=+0.121452662 container init 392aec05e1f6e0b39c132e3b2cbf65eda56fb0c6deb959087ce573a251549173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 09:42:04 compute-0 ceph-mon[74456]: 2.17 scrub starts
Jan 26 09:42:04 compute-0 ceph-mon[74456]: 2.17 scrub ok
Jan 26 09:42:04 compute-0 ceph-mon[74456]: 3.15 scrub starts
Jan 26 09:42:04 compute-0 ceph-mon[74456]: 3.15 scrub ok
Jan 26 09:42:04 compute-0 ceph-mon[74456]: 6.14 scrub starts
Jan 26 09:42:04 compute-0 ceph-mon[74456]: 6.14 scrub ok
Jan 26 09:42:04 compute-0 ceph-mon[74456]: pgmap v137: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:04 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1533391079' entity='client.admin' 
Jan 26 09:42:04 compute-0 podman[89806]: 2026-01-26 09:42:04.519354669 +0000 UTC m=+0.131177197 container start 392aec05e1f6e0b39c132e3b2cbf65eda56fb0c6deb959087ce573a251549173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:04 compute-0 podman[89806]: 2026-01-26 09:42:04.522080693 +0000 UTC m=+0.133903211 container attach 392aec05e1f6e0b39c132e3b2cbf65eda56fb0c6deb959087ce573a251549173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_turing, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:04 compute-0 podman[89713]: 2026-01-26 09:42:04.528950911 +0000 UTC m=+0.574431933 container remove ed3c1edb6300d77d8f778cef9afd2ed40dc731e55c35fa17f3e6e54687ed8d60 (image=quay.io/ceph/ceph:v19, name=mystifying_mcnulty, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 26 09:42:04 compute-0 systemd[1]: libpod-conmon-ed3c1edb6300d77d8f778cef9afd2ed40dc731e55c35fa17f3e6e54687ed8d60.scope: Deactivated successfully.
Jan 26 09:42:04 compute-0 sudo[89686]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ab3649987f3b3f0a830b3205afccbdc5517d2f9882c7d5ef51fabb5787ba11f-merged.mount: Deactivated successfully.
Jan 26 09:42:04 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Jan 26 09:42:04 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Jan 26 09:42:04 compute-0 bold_turing[89824]: {
Jan 26 09:42:04 compute-0 bold_turing[89824]:     "0": [
Jan 26 09:42:04 compute-0 bold_turing[89824]:         {
Jan 26 09:42:04 compute-0 bold_turing[89824]:             "devices": [
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "/dev/loop3"
Jan 26 09:42:04 compute-0 bold_turing[89824]:             ],
Jan 26 09:42:04 compute-0 bold_turing[89824]:             "lv_name": "ceph_lv0",
Jan 26 09:42:04 compute-0 bold_turing[89824]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:42:04 compute-0 bold_turing[89824]:             "lv_size": "21470642176",
Jan 26 09:42:04 compute-0 bold_turing[89824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:42:04 compute-0 bold_turing[89824]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:42:04 compute-0 bold_turing[89824]:             "name": "ceph_lv0",
Jan 26 09:42:04 compute-0 bold_turing[89824]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:42:04 compute-0 bold_turing[89824]:             "tags": {
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "ceph.cluster_name": "ceph",
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "ceph.crush_device_class": "",
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "ceph.encrypted": "0",
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "ceph.osd_id": "0",
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "ceph.type": "block",
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "ceph.vdo": "0",
Jan 26 09:42:04 compute-0 bold_turing[89824]:                 "ceph.with_tpm": "0"
Jan 26 09:42:04 compute-0 bold_turing[89824]:             },
Jan 26 09:42:04 compute-0 bold_turing[89824]:             "type": "block",
Jan 26 09:42:04 compute-0 bold_turing[89824]:             "vg_name": "ceph_vg0"
Jan 26 09:42:04 compute-0 bold_turing[89824]:         }
Jan 26 09:42:04 compute-0 bold_turing[89824]:     ]
Jan 26 09:42:04 compute-0 bold_turing[89824]: }
Jan 26 09:42:04 compute-0 systemd[1]: libpod-392aec05e1f6e0b39c132e3b2cbf65eda56fb0c6deb959087ce573a251549173.scope: Deactivated successfully.
Jan 26 09:42:04 compute-0 podman[89806]: 2026-01-26 09:42:04.805632755 +0000 UTC m=+0.417455293 container died 392aec05e1f6e0b39c132e3b2cbf65eda56fb0c6deb959087ce573a251549173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_turing, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1140981a3b1b66e9f1200b2d9dfa4a056be9f8691d81601466ebf48ba19e29ae-merged.mount: Deactivated successfully.
Jan 26 09:42:04 compute-0 podman[89806]: 2026-01-26 09:42:04.844209666 +0000 UTC m=+0.456032194 container remove 392aec05e1f6e0b39c132e3b2cbf65eda56fb0c6deb959087ce573a251549173 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_turing, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 26 09:42:04 compute-0 systemd[1]: libpod-conmon-392aec05e1f6e0b39c132e3b2cbf65eda56fb0c6deb959087ce573a251549173.scope: Deactivated successfully.
Jan 26 09:42:04 compute-0 sudo[89638]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:04 compute-0 sudo[89857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:04 compute-0 sudo[89857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:04 compute-0 sudo[89857]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:04 compute-0 sudo[89882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:42:04 compute-0 sudo[89882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:05 compute-0 sudo[89966]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqdpbjtxkpjuywzbkawekoiweszlybik ; /usr/bin/python3'
Jan 26 09:42:05 compute-0 sudo[89966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:05 compute-0 podman[89969]: 2026-01-26 09:42:05.396830443 +0000 UTC m=+0.101045806 container create a6419617a00f5fd57dd4d408159b84848f580c93e3c97153256e161630614529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_keldysh, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:05 compute-0 python3[89968]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.oynaeu/server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:05 compute-0 podman[89969]: 2026-01-26 09:42:05.316830592 +0000 UTC m=+0.021045975 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:05 compute-0 systemd[1]: Started libpod-conmon-a6419617a00f5fd57dd4d408159b84848f580c93e3c97153256e161630614529.scope.
Jan 26 09:42:05 compute-0 podman[89983]: 2026-01-26 09:42:05.466750269 +0000 UTC m=+0.043667091 container create 041b9bad39beb574c961bcb9c7c106291db78bafd943059f7ef8abd56644139b (image=quay.io/ceph/ceph:v19, name=intelligent_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 09:42:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:05 compute-0 podman[89969]: 2026-01-26 09:42:05.486128848 +0000 UTC m=+0.190344221 container init a6419617a00f5fd57dd4d408159b84848f580c93e3c97153256e161630614529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:05 compute-0 systemd[1]: Started libpod-conmon-041b9bad39beb574c961bcb9c7c106291db78bafd943059f7ef8abd56644139b.scope.
Jan 26 09:42:05 compute-0 podman[89969]: 2026-01-26 09:42:05.492795849 +0000 UTC m=+0.197011212 container start a6419617a00f5fd57dd4d408159b84848f580c93e3c97153256e161630614529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 26 09:42:05 compute-0 podman[89969]: 2026-01-26 09:42:05.495524313 +0000 UTC m=+0.199739676 container attach a6419617a00f5fd57dd4d408159b84848f580c93e3c97153256e161630614529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:05 compute-0 silly_keldysh[89998]: 167 167
Jan 26 09:42:05 compute-0 systemd[1]: libpod-a6419617a00f5fd57dd4d408159b84848f580c93e3c97153256e161630614529.scope: Deactivated successfully.
Jan 26 09:42:05 compute-0 podman[89969]: 2026-01-26 09:42:05.496688525 +0000 UTC m=+0.200903888 container died a6419617a00f5fd57dd4d408159b84848f580c93e3c97153256e161630614529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_keldysh, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b859bdbdd6c6ade953ef04cfd52e7b06f53ae861e6bf8031b80d18578dfe5e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b859bdbdd6c6ade953ef04cfd52e7b06f53ae861e6bf8031b80d18578dfe5e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b859bdbdd6c6ade953ef04cfd52e7b06f53ae861e6bf8031b80d18578dfe5e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:05 compute-0 podman[89983]: 2026-01-26 09:42:05.51738054 +0000 UTC m=+0.094297362 container init 041b9bad39beb574c961bcb9c7c106291db78bafd943059f7ef8abd56644139b (image=quay.io/ceph/ceph:v19, name=intelligent_wu, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 09:42:05 compute-0 podman[89983]: 2026-01-26 09:42:05.522593622 +0000 UTC m=+0.099510434 container start 041b9bad39beb574c961bcb9c7c106291db78bafd943059f7ef8abd56644139b (image=quay.io/ceph/ceph:v19, name=intelligent_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d837b2ee8cd202b84369034eed1de2dbf23dec02ec21ee0f696f14ffad895397-merged.mount: Deactivated successfully.
Jan 26 09:42:05 compute-0 podman[89983]: 2026-01-26 09:42:05.530038815 +0000 UTC m=+0.106955647 container attach 041b9bad39beb574c961bcb9c7c106291db78bafd943059f7ef8abd56644139b (image=quay.io/ceph/ceph:v19, name=intelligent_wu, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:05 compute-0 ceph-mon[74456]: 7.12 scrub starts
Jan 26 09:42:05 compute-0 ceph-mon[74456]: 7.12 scrub ok
Jan 26 09:42:05 compute-0 ceph-mon[74456]: 5.13 scrub starts
Jan 26 09:42:05 compute-0 ceph-mon[74456]: 5.13 scrub ok
Jan 26 09:42:05 compute-0 ceph-mon[74456]: 6.16 scrub starts
Jan 26 09:42:05 compute-0 ceph-mon[74456]: 6.16 scrub ok
Jan 26 09:42:05 compute-0 podman[89983]: 2026-01-26 09:42:05.448313917 +0000 UTC m=+0.025230749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:05 compute-0 podman[89969]: 2026-01-26 09:42:05.549021583 +0000 UTC m=+0.253236946 container remove a6419617a00f5fd57dd4d408159b84848f580c93e3c97153256e161630614529 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 09:42:05 compute-0 systemd[1]: libpod-conmon-a6419617a00f5fd57dd4d408159b84848f580c93e3c97153256e161630614529.scope: Deactivated successfully.
Jan 26 09:42:05 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 26 09:42:05 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 26 09:42:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v138: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:05 compute-0 podman[90037]: 2026-01-26 09:42:05.675121371 +0000 UTC m=+0.017316114 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.oynaeu/server_addr}] v 0)
Jan 26 09:42:06 compute-0 podman[90037]: 2026-01-26 09:42:06.158947331 +0000 UTC m=+0.501142084 container create 66717f94f9a983059503d3a4349ff2260de0a78cbbc4a7656a72b792fb8e1faa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_darwin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:06 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1831666463' entity='client.admin' 
Jan 26 09:42:06 compute-0 systemd[1]: libpod-041b9bad39beb574c961bcb9c7c106291db78bafd943059f7ef8abd56644139b.scope: Deactivated successfully.
Jan 26 09:42:06 compute-0 podman[89983]: 2026-01-26 09:42:06.387627797 +0000 UTC m=+0.964544609 container died 041b9bad39beb574c961bcb9c7c106291db78bafd943059f7ef8abd56644139b (image=quay.io/ceph/ceph:v19, name=intelligent_wu, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:06 compute-0 systemd[1]: Started libpod-conmon-66717f94f9a983059503d3a4349ff2260de0a78cbbc4a7656a72b792fb8e1faa.scope.
Jan 26 09:42:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070b67371d51e1801f7d867e89daa25672658a9dfaddecbe03e18152cb396d7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070b67371d51e1801f7d867e89daa25672658a9dfaddecbe03e18152cb396d7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070b67371d51e1801f7d867e89daa25672658a9dfaddecbe03e18152cb396d7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070b67371d51e1801f7d867e89daa25672658a9dfaddecbe03e18152cb396d7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:06 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Jan 26 09:42:06 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Jan 26 09:42:06 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 12 completed events
Jan 26 09:42:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:42:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-62b859bdbdd6c6ade953ef04cfd52e7b06f53ae861e6bf8031b80d18578dfe5e-merged.mount: Deactivated successfully.
Jan 26 09:42:07 compute-0 ceph-mon[74456]: 2.16 scrub starts
Jan 26 09:42:07 compute-0 ceph-mon[74456]: 2.16 scrub ok
Jan 26 09:42:07 compute-0 ceph-mon[74456]: 5.12 scrub starts
Jan 26 09:42:07 compute-0 ceph-mon[74456]: 5.12 scrub ok
Jan 26 09:42:07 compute-0 ceph-mon[74456]: 4.12 scrub starts
Jan 26 09:42:07 compute-0 ceph-mon[74456]: 4.12 scrub ok
Jan 26 09:42:07 compute-0 ceph-mon[74456]: pgmap v138: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:07 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1831666463' entity='client.admin' 
Jan 26 09:42:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:42:07 compute-0 podman[90062]: 2026-01-26 09:42:07.404619273 +0000 UTC m=+1.081364622 container remove 041b9bad39beb574c961bcb9c7c106291db78bafd943059f7ef8abd56644139b (image=quay.io/ceph/ceph:v19, name=intelligent_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 26 09:42:07 compute-0 systemd[1]: libpod-conmon-041b9bad39beb574c961bcb9c7c106291db78bafd943059f7ef8abd56644139b.scope: Deactivated successfully.
Jan 26 09:42:07 compute-0 sudo[89966]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:07 compute-0 podman[90037]: 2026-01-26 09:42:07.476170685 +0000 UTC m=+1.818365428 container init 66717f94f9a983059503d3a4349ff2260de0a78cbbc4a7656a72b792fb8e1faa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_darwin, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 26 09:42:07 compute-0 podman[90037]: 2026-01-26 09:42:07.482813686 +0000 UTC m=+1.825008419 container start 66717f94f9a983059503d3a4349ff2260de0a78cbbc4a7656a72b792fb8e1faa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_darwin, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:42:07 compute-0 podman[90037]: 2026-01-26 09:42:07.486022563 +0000 UTC m=+1.828217286 container attach 66717f94f9a983059503d3a4349ff2260de0a78cbbc4a7656a72b792fb8e1faa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 09:42:07 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts
Jan 26 09:42:07 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.11 deep-scrub ok
Jan 26 09:42:07 compute-0 sudo[90107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqjajqhpotubjwkadfufbxliehfyelvf ; /usr/bin/python3'
Jan 26 09:42:07 compute-0 sudo[90107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v139: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:07 compute-0 python3[90109]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:07 compute-0 podman[90126]: 2026-01-26 09:42:07.817127621 +0000 UTC m=+0.043871138 container create bbd59bcba0651fc77bdbb859212f640f275ae38dbbb4c09277598f347a17fcdb (image=quay.io/ceph/ceph:v19, name=quizzical_mclean, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 26 09:42:07 compute-0 systemd[1]: Started libpod-conmon-bbd59bcba0651fc77bdbb859212f640f275ae38dbbb4c09277598f347a17fcdb.scope.
Jan 26 09:42:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213007ab93144ff3a3b435aa21dc4f99779cd8f39445e5f0f44801c031c19416/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213007ab93144ff3a3b435aa21dc4f99779cd8f39445e5f0f44801c031c19416/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213007ab93144ff3a3b435aa21dc4f99779cd8f39445e5f0f44801c031c19416/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:07 compute-0 podman[90126]: 2026-01-26 09:42:07.889828322 +0000 UTC m=+0.116571869 container init bbd59bcba0651fc77bdbb859212f640f275ae38dbbb4c09277598f347a17fcdb (image=quay.io/ceph/ceph:v19, name=quizzical_mclean, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 09:42:07 compute-0 podman[90126]: 2026-01-26 09:42:07.799553022 +0000 UTC m=+0.026296559 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:07 compute-0 podman[90126]: 2026-01-26 09:42:07.897634056 +0000 UTC m=+0.124377573 container start bbd59bcba0651fc77bdbb859212f640f275ae38dbbb4c09277598f347a17fcdb (image=quay.io/ceph/ceph:v19, name=quizzical_mclean, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:07 compute-0 podman[90126]: 2026-01-26 09:42:07.901918022 +0000 UTC m=+0.128661539 container attach bbd59bcba0651fc77bdbb859212f640f275ae38dbbb4c09277598f347a17fcdb (image=quay.io/ceph/ceph:v19, name=quizzical_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:08 compute-0 ceph-mon[74456]: 2.14 scrub starts
Jan 26 09:42:08 compute-0 ceph-mon[74456]: 2.14 scrub ok
Jan 26 09:42:08 compute-0 ceph-mon[74456]: 3.11 deep-scrub starts
Jan 26 09:42:08 compute-0 ceph-mon[74456]: 3.11 deep-scrub ok
Jan 26 09:42:08 compute-0 ceph-mon[74456]: 6.10 scrub starts
Jan 26 09:42:08 compute-0 ceph-mon[74456]: 6.10 scrub ok
Jan 26 09:42:08 compute-0 ceph-mon[74456]: 7.17 scrub starts
Jan 26 09:42:08 compute-0 ceph-mon[74456]: 7.17 scrub ok
Jan 26 09:42:08 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:08 compute-0 ceph-mon[74456]: pgmap v139: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:08 compute-0 lvm[90217]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:42:08 compute-0 lvm[90217]: VG ceph_vg0 finished
Jan 26 09:42:08 compute-0 tender_darwin[90077]: {}
Jan 26 09:42:08 compute-0 systemd[1]: libpod-66717f94f9a983059503d3a4349ff2260de0a78cbbc4a7656a72b792fb8e1faa.scope: Deactivated successfully.
Jan 26 09:42:08 compute-0 systemd[1]: libpod-66717f94f9a983059503d3a4349ff2260de0a78cbbc4a7656a72b792fb8e1faa.scope: Consumed 1.259s CPU time.
Jan 26 09:42:08 compute-0 conmon[90077]: conmon 66717f94f9a983059503 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66717f94f9a983059503d3a4349ff2260de0a78cbbc4a7656a72b792fb8e1faa.scope/container/memory.events
Jan 26 09:42:08 compute-0 podman[90037]: 2026-01-26 09:42:08.276800613 +0000 UTC m=+2.618995366 container died 66717f94f9a983059503d3a4349ff2260de0a78cbbc4a7656a72b792fb8e1faa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_darwin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 09:42:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-070b67371d51e1801f7d867e89daa25672658a9dfaddecbe03e18152cb396d7d-merged.mount: Deactivated successfully.
Jan 26 09:42:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Jan 26 09:42:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1606551457' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 26 09:42:08 compute-0 podman[90037]: 2026-01-26 09:42:08.340675884 +0000 UTC m=+2.682870607 container remove 66717f94f9a983059503d3a4349ff2260de0a78cbbc4a7656a72b792fb8e1faa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:08 compute-0 systemd[1]: libpod-conmon-66717f94f9a983059503d3a4349ff2260de0a78cbbc4a7656a72b792fb8e1faa.scope: Deactivated successfully.
Jan 26 09:42:08 compute-0 sudo[89882]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:42:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:42:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:08 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev e0287345-1ca9-4cc9-8af8-5afa30580710 (Updating rgw.rgw deployment (+3 -> 3))
Jan 26 09:42:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.fgzdbm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 26 09:42:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.fgzdbm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:42:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.fgzdbm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 09:42:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 26 09:42:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:42:08 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:08 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.fgzdbm on compute-2
Jan 26 09:42:08 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.fgzdbm on compute-2
Jan 26 09:42:08 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Jan 26 09:42:08 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Jan 26 09:42:09 compute-0 ceph-mon[74456]: 3.e scrub starts
Jan 26 09:42:09 compute-0 ceph-mon[74456]: 3.e scrub ok
Jan 26 09:42:09 compute-0 ceph-mon[74456]: 4.11 deep-scrub starts
Jan 26 09:42:09 compute-0 ceph-mon[74456]: 4.11 deep-scrub ok
Jan 26 09:42:09 compute-0 ceph-mon[74456]: 2.11 scrub starts
Jan 26 09:42:09 compute-0 ceph-mon[74456]: 2.11 scrub ok
Jan 26 09:42:09 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1606551457' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 26 09:42:09 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:09 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:09 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.fgzdbm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:42:09 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.fgzdbm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 09:42:09 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:09 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:09 compute-0 ceph-mon[74456]: Deploying daemon rgw.rgw.compute-2.fgzdbm on compute-2
Jan 26 09:42:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1606551457' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 26 09:42:09 compute-0 quizzical_mclean[90151]: module 'dashboard' is already disabled
Jan 26 09:42:09 compute-0 systemd[1]: libpod-bbd59bcba0651fc77bdbb859212f640f275ae38dbbb4c09277598f347a17fcdb.scope: Deactivated successfully.
Jan 26 09:42:09 compute-0 podman[90126]: 2026-01-26 09:42:09.135562486 +0000 UTC m=+1.362305993 container died bbd59bcba0651fc77bdbb859212f640f275ae38dbbb4c09277598f347a17fcdb (image=quay.io/ceph/ceph:v19, name=quizzical_mclean, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 26 09:42:09 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.zllcia(active, since 3m), standbys: compute-2.oynaeu, compute-1.xammti
Jan 26 09:42:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-213007ab93144ff3a3b435aa21dc4f99779cd8f39445e5f0f44801c031c19416-merged.mount: Deactivated successfully.
Jan 26 09:42:09 compute-0 podman[90126]: 2026-01-26 09:42:09.174018646 +0000 UTC m=+1.400762163 container remove bbd59bcba0651fc77bdbb859212f640f275ae38dbbb4c09277598f347a17fcdb (image=quay.io/ceph/ceph:v19, name=quizzical_mclean, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 26 09:42:09 compute-0 systemd[1]: libpod-conmon-bbd59bcba0651fc77bdbb859212f640f275ae38dbbb4c09277598f347a17fcdb.scope: Deactivated successfully.
Jan 26 09:42:09 compute-0 sudo[90107]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:09 compute-0 sudo[90268]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymyqccwgkbvdppkmgoqpbuewnoczsont ; /usr/bin/python3'
Jan 26 09:42:09 compute-0 sudo[90268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:09 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Jan 26 09:42:09 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Jan 26 09:42:09 compute-0 python3[90270]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v140: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:09 compute-0 podman[90271]: 2026-01-26 09:42:09.650463785 +0000 UTC m=+0.047079735 container create 21802167bb398f784819697ee0d6197eeb8a430f6fbfca20d8a6e13d9ad6fe85 (image=quay.io/ceph/ceph:v19, name=hardcore_kalam, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 26 09:42:09 compute-0 systemd[1]: Started libpod-conmon-21802167bb398f784819697ee0d6197eeb8a430f6fbfca20d8a6e13d9ad6fe85.scope.
Jan 26 09:42:09 compute-0 podman[90271]: 2026-01-26 09:42:09.632866225 +0000 UTC m=+0.029482195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec314778b821f002d8a83d1873d0b8ebc1a5e67a1f25768770fd20e9f9883a6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec314778b821f002d8a83d1873d0b8ebc1a5e67a1f25768770fd20e9f9883a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec314778b821f002d8a83d1873d0b8ebc1a5e67a1f25768770fd20e9f9883a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:09 compute-0 podman[90271]: 2026-01-26 09:42:09.74967361 +0000 UTC m=+0.146289580 container init 21802167bb398f784819697ee0d6197eeb8a430f6fbfca20d8a6e13d9ad6fe85 (image=quay.io/ceph/ceph:v19, name=hardcore_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 09:42:09 compute-0 podman[90271]: 2026-01-26 09:42:09.761962576 +0000 UTC m=+0.158578526 container start 21802167bb398f784819697ee0d6197eeb8a430f6fbfca20d8a6e13d9ad6fe85 (image=quay.io/ceph/ceph:v19, name=hardcore_kalam, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:09 compute-0 podman[90271]: 2026-01-26 09:42:09.765570483 +0000 UTC m=+0.162186463 container attach 21802167bb398f784819697ee0d6197eeb8a430f6fbfca20d8a6e13d9ad6fe85 (image=quay.io/ceph/ceph:v19, name=hardcore_kalam, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:10 compute-0 ceph-mon[74456]: 5.8 scrub starts
Jan 26 09:42:10 compute-0 ceph-mon[74456]: 5.8 scrub ok
Jan 26 09:42:10 compute-0 ceph-mon[74456]: 6.11 scrub starts
Jan 26 09:42:10 compute-0 ceph-mon[74456]: 6.11 scrub ok
Jan 26 09:42:10 compute-0 ceph-mon[74456]: 7.15 scrub starts
Jan 26 09:42:10 compute-0 ceph-mon[74456]: 7.15 scrub ok
Jan 26 09:42:10 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1606551457' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 26 09:42:10 compute-0 ceph-mon[74456]: mgrmap e12: compute-0.zllcia(active, since 3m), standbys: compute-2.oynaeu, compute-1.xammti
Jan 26 09:42:10 compute-0 ceph-mon[74456]: 5.b scrub starts
Jan 26 09:42:10 compute-0 ceph-mon[74456]: 5.b scrub ok
Jan 26 09:42:10 compute-0 ceph-mon[74456]: pgmap v140: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Jan 26 09:42:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/588199508' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 26 09:42:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:42:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:42:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 26 09:42:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fbcidm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 26 09:42:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fbcidm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:42:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fbcidm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 09:42:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 26 09:42:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:42:10 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:10 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.fbcidm on compute-1
Jan 26 09:42:10 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.fbcidm on compute-1
Jan 26 09:42:10 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 26 09:42:10 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 26 09:42:11 compute-0 ceph-mon[74456]: 6.13 scrub starts
Jan 26 09:42:11 compute-0 ceph-mon[74456]: 6.13 scrub ok
Jan 26 09:42:11 compute-0 ceph-mon[74456]: 2.3 scrub starts
Jan 26 09:42:11 compute-0 ceph-mon[74456]: 2.3 scrub ok
Jan 26 09:42:11 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/588199508' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 26 09:42:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fbcidm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:42:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fbcidm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 09:42:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:11 compute-0 ceph-mon[74456]: from='mgr.14122 192.168.122.100:0/1352844427' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:11 compute-0 ceph-mon[74456]: Deploying daemon rgw.rgw.compute-1.fbcidm on compute-1
Jan 26 09:42:11 compute-0 ceph-mon[74456]: 3.0 scrub starts
Jan 26 09:42:11 compute-0 ceph-mon[74456]: 3.0 scrub ok
Jan 26 09:42:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/588199508' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn  1: '-n'
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn  2: 'mgr.compute-0.zllcia'
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn  3: '-f'
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn  4: '--setuser'
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn  5: 'ceph'
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn  6: '--setgroup'
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn  7: 'ceph'
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn  8: '--default-log-to-file=false'
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn  9: '--default-log-to-journald=true'
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr respawn  exe_path /proc/self/exe
Jan 26 09:42:11 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.zllcia(active, since 3m), standbys: compute-2.oynaeu, compute-1.xammti
Jan 26 09:42:11 compute-0 systemd[1]: libpod-21802167bb398f784819697ee0d6197eeb8a430f6fbfca20d8a6e13d9ad6fe85.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 podman[90313]: 2026-01-26 09:42:11.250047176 +0000 UTC m=+0.033802543 container died 21802167bb398f784819697ee0d6197eeb8a430f6fbfca20d8a6e13d9ad6fe85 (image=quay.io/ceph/ceph:v19, name=hardcore_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 09:42:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 26 09:42:11 compute-0 sshd-session[76070]: Connection closed by 192.168.122.100 port 42532
Jan 26 09:42:11 compute-0 sshd-session[76014]: Connection closed by 192.168.122.100 port 42518
Jan 26 09:42:11 compute-0 sshd-session[76099]: Connection closed by 192.168.122.100 port 42544
Jan 26 09:42:11 compute-0 sshd-session[75985]: Connection closed by 192.168.122.100 port 42512
Jan 26 09:42:11 compute-0 sshd-session[76043]: Connection closed by 192.168.122.100 port 42528
Jan 26 09:42:11 compute-0 sshd-session[75956]: Connection closed by 192.168.122.100 port 42498
Jan 26 09:42:11 compute-0 sshd-session[75927]: Connection closed by 192.168.122.100 port 42486
Jan 26 09:42:11 compute-0 sshd-session[75869]: Connection closed by 192.168.122.100 port 42460
Jan 26 09:42:11 compute-0 sshd-session[75898]: Connection closed by 192.168.122.100 port 42472
Jan 26 09:42:11 compute-0 sshd-session[75840]: Connection closed by 192.168.122.100 port 42450
Jan 26 09:42:11 compute-0 sshd-session[75811]: Connection closed by 192.168.122.100 port 42434
Jan 26 09:42:11 compute-0 sshd-session[75810]: Connection closed by 192.168.122.100 port 42428
Jan 26 09:42:11 compute-0 sshd-session[76040]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:42:11 compute-0 sshd-session[75982]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:42:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 26 09:42:11 compute-0 sshd-session[76096]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:42:11 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 sshd-session[75953]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:42:11 compute-0 systemd-logind[787]: Session 31 logged out. Waiting for processes to exit.
Jan 26 09:42:11 compute-0 sshd-session[75837]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:42:11 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 sshd-session[75924]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:42:11 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 sshd-session[75895]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:42:11 compute-0 systemd[1]: session-33.scope: Consumed 30.104s CPU time.
Jan 26 09:42:11 compute-0 sshd-session[75866]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:42:11 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 sshd-session[75805]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:42:11 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 26 09:42:11 compute-0 sshd-session[75787]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:42:11 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 sshd-session[76011]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:42:11 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Session 26 logged out. Waiting for processes to exit.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Session 33 logged out. Waiting for processes to exit.
Jan 26 09:42:11 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Session 30 logged out. Waiting for processes to exit.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Session 29 logged out. Waiting for processes to exit.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Session 23 logged out. Waiting for processes to exit.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Session 24 logged out. Waiting for processes to exit.
Jan 26 09:42:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-bec314778b821f002d8a83d1873d0b8ebc1a5e67a1f25768770fd20e9f9883a6-merged.mount: Deactivated successfully.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Session 25 logged out. Waiting for processes to exit.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Session 28 logged out. Waiting for processes to exit.
Jan 26 09:42:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 26 09:42:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 26 09:42:11 compute-0 systemd-logind[787]: Session 27 logged out. Waiting for processes to exit.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Session 21 logged out. Waiting for processes to exit.
Jan 26 09:42:11 compute-0 sshd-session[76067]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:42:11 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Removed session 31.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Session 32 logged out. Waiting for processes to exit.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Removed session 26.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Removed session 33.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Removed session 30.
Jan 26 09:42:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ignoring --setuser ceph since I am not root
Jan 26 09:42:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ignoring --setgroup ceph since I am not root
Jan 26 09:42:11 compute-0 systemd-logind[787]: Removed session 29.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Removed session 27.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Removed session 25.
Jan 26 09:42:11 compute-0 podman[90313]: 2026-01-26 09:42:11.308307024 +0000 UTC m=+0.092062341 container remove 21802167bb398f784819697ee0d6197eeb8a430f6fbfca20d8a6e13d9ad6fe85 (image=quay.io/ceph/ceph:v19, name=hardcore_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 26 09:42:11 compute-0 systemd-logind[787]: Removed session 28.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Removed session 24.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Removed session 23.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Removed session 21.
Jan 26 09:42:11 compute-0 systemd-logind[787]: Removed session 32.
Jan 26 09:42:11 compute-0 systemd[1]: libpod-conmon-21802167bb398f784819697ee0d6197eeb8a430f6fbfca20d8a6e13d9ad6fe85.scope: Deactivated successfully.
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: pidfile_write: ignore empty --pid-file
Jan 26 09:42:11 compute-0 sudo[90268]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'alerts'
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 09:42:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:11.448+0000 7fd58f9f8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'balancer'
Jan 26 09:42:11 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 46 pg[8.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [0] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 09:42:11 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'cephadm'
Jan 26 09:42:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:11.536+0000 7fd58f9f8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 09:42:11 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 26 09:42:11 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 26 09:42:11 compute-0 sudo[90370]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbimbplaqvobyjnjngqpyozyksaojgks ; /usr/bin/python3'
Jan 26 09:42:11 compute-0 sudo[90370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:11 compute-0 python3[90372]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:11 compute-0 podman[90373]: 2026-01-26 09:42:11.820936071 +0000 UTC m=+0.060071889 container create ea14efbbcf9c3c020492cca917ef36feb8154569bc9431d0da9382846d638ae1 (image=quay.io/ceph/ceph:v19, name=reverent_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:11 compute-0 systemd[1]: Started libpod-conmon-ea14efbbcf9c3c020492cca917ef36feb8154569bc9431d0da9382846d638ae1.scope.
Jan 26 09:42:11 compute-0 podman[90373]: 2026-01-26 09:42:11.795149438 +0000 UTC m=+0.034285276 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aed5ad36e3644e504b82e719b63d049be20487b96597dd97edec8db590e3355/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aed5ad36e3644e504b82e719b63d049be20487b96597dd97edec8db590e3355/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aed5ad36e3644e504b82e719b63d049be20487b96597dd97edec8db590e3355/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:11 compute-0 podman[90373]: 2026-01-26 09:42:11.941787465 +0000 UTC m=+0.180923373 container init ea14efbbcf9c3c020492cca917ef36feb8154569bc9431d0da9382846d638ae1 (image=quay.io/ceph/ceph:v19, name=reverent_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 26 09:42:11 compute-0 podman[90373]: 2026-01-26 09:42:11.953439164 +0000 UTC m=+0.192574982 container start ea14efbbcf9c3c020492cca917ef36feb8154569bc9431d0da9382846d638ae1 (image=quay.io/ceph/ceph:v19, name=reverent_margulis, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:11 compute-0 podman[90373]: 2026-01-26 09:42:11.956927228 +0000 UTC m=+0.196063146 container attach ea14efbbcf9c3c020492cca917ef36feb8154569bc9431d0da9382846d638ae1 (image=quay.io/ceph/ceph:v19, name=reverent_margulis, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:12 compute-0 ceph-mon[74456]: 4.10 scrub starts
Jan 26 09:42:12 compute-0 ceph-mon[74456]: 4.10 scrub ok
Jan 26 09:42:12 compute-0 ceph-mon[74456]: 7.0 scrub starts
Jan 26 09:42:12 compute-0 ceph-mon[74456]: 7.0 scrub ok
Jan 26 09:42:12 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/588199508' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 26 09:42:12 compute-0 ceph-mon[74456]: mgrmap e13: compute-0.zllcia(active, since 3m), standbys: compute-2.oynaeu, compute-1.xammti
Jan 26 09:42:12 compute-0 ceph-mon[74456]: osdmap e46: 3 total, 3 up, 3 in
Jan 26 09:42:12 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2891557756' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 26 09:42:12 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 26 09:42:12 compute-0 ceph-mon[74456]: 5.0 scrub starts
Jan 26 09:42:12 compute-0 ceph-mon[74456]: 5.0 scrub ok
Jan 26 09:42:12 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'crash'
Jan 26 09:42:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 26 09:42:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 26 09:42:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 26 09:42:12 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 26 09:42:12 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 47 pg[8.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [0] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:42:12 compute-0 ceph-mgr[74755]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 09:42:12 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'dashboard'
Jan 26 09:42:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:12.345+0000 7fd58f9f8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 09:42:12 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 26 09:42:12 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 26 09:42:12 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'devicehealth'
Jan 26 09:42:13 compute-0 ceph-mgr[74755]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 09:42:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:13.033+0000 7fd58f9f8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 09:42:13 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'diskprediction_local'
Jan 26 09:42:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 26 09:42:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 26 09:42:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   from numpy import show_config as show_numpy_config
Jan 26 09:42:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:13.205+0000 7fd58f9f8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 09:42:13 compute-0 ceph-mgr[74755]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 09:42:13 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'influx'
Jan 26 09:42:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 26 09:42:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:13.275+0000 7fd58f9f8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 09:42:13 compute-0 ceph-mgr[74755]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 09:42:13 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'insights'
Jan 26 09:42:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 26 09:42:13 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 26 09:42:13 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'iostat'
Jan 26 09:42:13 compute-0 ceph-mon[74456]: 4.f scrub starts
Jan 26 09:42:13 compute-0 ceph-mon[74456]: 4.f scrub ok
Jan 26 09:42:13 compute-0 ceph-mon[74456]: 2.2 deep-scrub starts
Jan 26 09:42:13 compute-0 ceph-mon[74456]: 2.2 deep-scrub ok
Jan 26 09:42:13 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 26 09:42:13 compute-0 ceph-mon[74456]: osdmap e47: 3 total, 3 up, 3 in
Jan 26 09:42:13 compute-0 ceph-mon[74456]: 5.4 scrub starts
Jan 26 09:42:13 compute-0 ceph-mon[74456]: 5.4 scrub ok
Jan 26 09:42:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 26 09:42:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 26 09:42:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 26 09:42:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 26 09:42:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:13.405+0000 7fd58f9f8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 09:42:13 compute-0 ceph-mgr[74755]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 09:42:13 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'k8sevents'
Jan 26 09:42:13 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 26 09:42:13 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 48 pg[9.0( empty local-lis/les=0/0 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [0] r=0 lpr=48 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:13 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 26 09:42:13 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'localpool'
Jan 26 09:42:13 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'mds_autoscaler'
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'mirroring'
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'nfs'
Jan 26 09:42:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 26 09:42:14 compute-0 ceph-mon[74456]: 6.c scrub starts
Jan 26 09:42:14 compute-0 ceph-mon[74456]: 6.c scrub ok
Jan 26 09:42:14 compute-0 ceph-mon[74456]: 7.7 scrub starts
Jan 26 09:42:14 compute-0 ceph-mon[74456]: 7.7 scrub ok
Jan 26 09:42:14 compute-0 ceph-mon[74456]: osdmap e48: 3 total, 3 up, 3 in
Jan 26 09:42:14 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1812478715' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 26 09:42:14 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/992292627' entity='client.rgw.rgw.compute-1.fbcidm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 26 09:42:14 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 26 09:42:14 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 26 09:42:14 compute-0 ceph-mon[74456]: 5.e scrub starts
Jan 26 09:42:14 compute-0 ceph-mon[74456]: 5.e scrub ok
Jan 26 09:42:14 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 26 09:42:14 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 26 09:42:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 26 09:42:14 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 26 09:42:14 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 49 pg[9.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [0] r=0 lpr=48 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:14.424+0000 7fd58f9f8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'orchestrator'
Jan 26 09:42:14 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 26 09:42:14 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 26 09:42:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:14.657+0000 7fd58f9f8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'osd_perf_query'
Jan 26 09:42:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:14.737+0000 7fd58f9f8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'osd_support'
Jan 26 09:42:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:14.806+0000 7fd58f9f8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'pg_autoscaler'
Jan 26 09:42:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:14.887+0000 7fd58f9f8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'progress'
Jan 26 09:42:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:14.958+0000 7fd58f9f8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 09:42:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'prometheus'
Jan 26 09:42:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:15.310+0000 7fd58f9f8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 09:42:15 compute-0 ceph-mgr[74755]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 09:42:15 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rbd_support'
Jan 26 09:42:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 26 09:42:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 26 09:42:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:15.409+0000 7fd58f9f8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 09:42:15 compute-0 ceph-mgr[74755]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 09:42:15 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'restful'
Jan 26 09:42:15 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 26 09:42:15 compute-0 ceph-mon[74456]: 6.f scrub starts
Jan 26 09:42:15 compute-0 ceph-mon[74456]: 6.f scrub ok
Jan 26 09:42:15 compute-0 ceph-mon[74456]: 7.1 scrub starts
Jan 26 09:42:15 compute-0 ceph-mon[74456]: 7.1 scrub ok
Jan 26 09:42:15 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 26 09:42:15 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 26 09:42:15 compute-0 ceph-mon[74456]: osdmap e49: 3 total, 3 up, 3 in
Jan 26 09:42:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 26 09:42:15 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 09:42:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 26 09:42:15 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 09:42:15 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 26 09:42:15 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 26 09:42:15 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rgw'
Jan 26 09:42:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:15.850+0000 7fd58f9f8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 09:42:15 compute-0 ceph-mgr[74755]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 09:42:15 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rook'
Jan 26 09:42:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 26 09:42:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 26 09:42:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 26 09:42:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 26 09:42:16 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 26 09:42:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:16.423+0000 7fd58f9f8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 09:42:16 compute-0 ceph-mgr[74755]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 09:42:16 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'selftest'
Jan 26 09:42:16 compute-0 ceph-mon[74456]: 5.d scrub starts
Jan 26 09:42:16 compute-0 ceph-mon[74456]: 5.d scrub ok
Jan 26 09:42:16 compute-0 ceph-mon[74456]: 4.0 scrub starts
Jan 26 09:42:16 compute-0 ceph-mon[74456]: 4.0 scrub ok
Jan 26 09:42:16 compute-0 ceph-mon[74456]: 7.d scrub starts
Jan 26 09:42:16 compute-0 ceph-mon[74456]: 7.d scrub ok
Jan 26 09:42:16 compute-0 ceph-mon[74456]: 5.1a scrub starts
Jan 26 09:42:16 compute-0 ceph-mon[74456]: 5.1a scrub ok
Jan 26 09:42:16 compute-0 ceph-mon[74456]: osdmap e50: 3 total, 3 up, 3 in
Jan 26 09:42:16 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/992292627' entity='client.rgw.rgw.compute-1.fbcidm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 09:42:16 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1812478715' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 09:42:16 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 09:42:16 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 09:42:16 compute-0 ceph-mon[74456]: 6.0 scrub starts
Jan 26 09:42:16 compute-0 ceph-mon[74456]: 6.0 scrub ok
Jan 26 09:42:16 compute-0 ceph-mon[74456]: 6.1e scrub starts
Jan 26 09:42:16 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 26 09:42:16 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 26 09:42:16 compute-0 ceph-mon[74456]: osdmap e51: 3 total, 3 up, 3 in
Jan 26 09:42:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:16.498+0000 7fd58f9f8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 09:42:16 compute-0 ceph-mgr[74755]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 09:42:16 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'snap_schedule'
Jan 26 09:42:16 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Jan 26 09:42:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:16.595+0000 7fd58f9f8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 09:42:16 compute-0 ceph-mgr[74755]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 09:42:16 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'stats'
Jan 26 09:42:16 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Jan 26 09:42:16 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'status'
Jan 26 09:42:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:16.760+0000 7fd58f9f8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 09:42:16 compute-0 ceph-mgr[74755]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 09:42:16 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'telegraf'
Jan 26 09:42:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:16.832+0000 7fd58f9f8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 09:42:16 compute-0 ceph-mgr[74755]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 09:42:16 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'telemetry'
Jan 26 09:42:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:16.982+0000 7fd58f9f8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 09:42:16 compute-0 ceph-mgr[74755]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 09:42:16 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'test_orchestrator'
Jan 26 09:42:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:17.194+0000 7fd58f9f8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'volumes'
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 26 09:42:17 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 52 pg[11.0( empty local-lis/les=0/0 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [0] r=0 lpr=52 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oynaeu restarted
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oynaeu started
Jan 26 09:42:17 compute-0 ceph-mon[74456]: 7.c scrub starts
Jan 26 09:42:17 compute-0 ceph-mon[74456]: 7.c scrub ok
Jan 26 09:42:17 compute-0 ceph-mon[74456]: 6.1e scrub ok
Jan 26 09:42:17 compute-0 ceph-mon[74456]: 4.4 deep-scrub starts
Jan 26 09:42:17 compute-0 ceph-mon[74456]: 4.4 deep-scrub ok
Jan 26 09:42:17 compute-0 ceph-mon[74456]: osdmap e52: 3 total, 3 up, 3 in
Jan 26 09:42:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:17.458+0000 7fd58f9f8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'zabbix'
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:17.532+0000 7fd58f9f8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Active manager daemon compute-0.zllcia restarted
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zllcia
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: ms_deliver_dispatch: unhandled message 0x55e41e81b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr handle_mgr_map Activating!
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.zllcia(active, starting, since 0.0391316s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr handle_mgr_map I am now activating
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.xammti", "id": "compute-1.xammti"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-1.xammti", "id": "compute-1.xammti"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.oynaeu", "id": "compute-2.oynaeu"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oynaeu", "id": "compute-2.oynaeu"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e1 all = 1
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 53 pg[11.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [0] r=0 lpr=52 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: balancer
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Manager daemon compute-0.zllcia is now available
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [balancer INFO root] Starting
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:42:17
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 26 09:42:17 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 26 09:42:17 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: cephadm
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: crash
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: dashboard
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: devicehealth
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: iostat
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [devicehealth INFO root] Starting
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: nfs
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO sso] Loading SSO DB version=1
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: orchestrator
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: pg_autoscaler
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: progress
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [progress INFO root] Loading...
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fd514da11f0>, <progress.module.GhostEvent object at 0x7fd514da1430>, <progress.module.GhostEvent object at 0x7fd514da1460>, <progress.module.GhostEvent object at 0x7fd514da1490>, <progress.module.GhostEvent object at 0x7fd514da14c0>, <progress.module.GhostEvent object at 0x7fd514da14f0>, <progress.module.GhostEvent object at 0x7fd514da1520>, <progress.module.GhostEvent object at 0x7fd514da1550>, <progress.module.GhostEvent object at 0x7fd514da1580>, <progress.module.GhostEvent object at 0x7fd514da15b0>, <progress.module.GhostEvent object at 0x7fd514da15e0>, <progress.module.GhostEvent object at 0x7fd514da1610>] historic events
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [progress INFO root] Loaded OSDMap, ready.
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] recovery thread starting
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] starting setup
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: rbd_support
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: restful
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [restful INFO root] server_addr: :: server_port: 8003
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: status
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: telemetry
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [restful WARNING root] server not running: no certificate configured
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] PerfHandler: starting
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TaskHandler: starting
Jan 26 09:42:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"} v 0)
Jan 26 09:42:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"}]: dispatch
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [rbd_support INFO root] setup complete
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: volumes
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 26 09:42:17 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 26 09:42:18 compute-0 sshd-session[90552]: Accepted publickey for ceph-admin from 192.168.122.100 port 39310 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:42:18 compute-0 systemd-logind[787]: New session 34 of user ceph-admin.
Jan 26 09:42:18 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Jan 26 09:42:18 compute-0 sshd-session[90552]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:42:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.module] Engine started.
Jan 26 09:42:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.xammti restarted
Jan 26 09:42:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.xammti started
Jan 26 09:42:18 compute-0 sudo[90568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:18 compute-0 sudo[90568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:18 compute-0 sudo[90568]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:18 compute-0 sudo[90593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 26 09:42:18 compute-0 sudo[90593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:18 compute-0 ceph-mon[74456]: 7.19 scrub starts
Jan 26 09:42:18 compute-0 ceph-mon[74456]: 7.19 scrub ok
Jan 26 09:42:18 compute-0 ceph-mon[74456]: 4.1c scrub starts
Jan 26 09:42:18 compute-0 ceph-mon[74456]: 4.1c scrub ok
Jan 26 09:42:18 compute-0 ceph-mon[74456]: Standby manager daemon compute-2.oynaeu restarted
Jan 26 09:42:18 compute-0 ceph-mon[74456]: Standby manager daemon compute-2.oynaeu started
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/992292627' entity='client.rgw.rgw.compute-1.fbcidm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1812478715' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: Active manager daemon compute-0.zllcia restarted
Jan 26 09:42:18 compute-0 ceph-mon[74456]: Activating manager daemon compute-0.zllcia
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 26 09:42:18 compute-0 ceph-mon[74456]: osdmap e53: 3 total, 3 up, 3 in
Jan 26 09:42:18 compute-0 ceph-mon[74456]: mgrmap e14: compute-0.zllcia(active, starting, since 0.0391316s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1812478715' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-1.xammti", "id": "compute-1.xammti"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oynaeu", "id": "compute-2.oynaeu"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/992292627' entity='client.rgw.rgw.compute-1.fbcidm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: Manager daemon compute-0.zllcia is now available
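With compute-0.zllcia active again and two standbys registered, the mgr map is healthy. A quick way to confirm the same state from the CLI:

    # Active/standby mgr view, matching the mgrmap lines in this log
    ceph mgr stat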
Jan 26 09:42:18 compute-0 ceph-mon[74456]: 6.6 scrub starts
Jan 26 09:42:18 compute-0 ceph-mon[74456]: 6.6 scrub ok
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: Standby manager daemon compute-1.xammti restarted
Jan 26 09:42:18 compute-0 ceph-mon[74456]: Standby manager daemon compute-1.xammti started
Jan 26 09:42:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 26 09:42:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 26 09:42:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
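Both RGW instances race to tag their freshly created pools and to bias the PG autoscaler; the commands are idempotent, which is why the duplicate dispatches from compute-1 and compute-2 both finish cleanly. As plain CLI, taken from the audited JSON above:

    # What the rgw clients asked the mon to do
    ceph osd pool application enable default.rgw.meta rgw
    ceph osd pool set default.rgw.meta pg_autoscale_bias 4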
Jan 26 09:42:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 26 09:42:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 26 09:42:18 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 26 09:42:18 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 26 09:42:18 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14364 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.zllcia(active, since 1.0904s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Jan 26 09:42:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v4: 197 pgs: 197 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:18 compute-0 reverent_margulis[90390]: Option GRAFANA_API_USERNAME updated
Jan 26 09:42:18 compute-0 systemd[1]: libpod-ea14efbbcf9c3c020492cca917ef36feb8154569bc9431d0da9382846d638ae1.scope: Deactivated successfully.
Jan 26 09:42:18 compute-0 podman[90373]: 2026-01-26 09:42:18.677632303 +0000 UTC m=+6.916768141 container died ea14efbbcf9c3c020492cca917ef36feb8154569bc9431d0da9382846d638ae1 (image=quay.io/ceph/ceph:v19, name=reverent_margulis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aed5ad36e3644e504b82e719b63d049be20487b96597dd97edec8db590e3355-merged.mount: Deactivated successfully.
Jan 26 09:42:18 compute-0 podman[90373]: 2026-01-26 09:42:18.717581523 +0000 UTC m=+6.956717341 container remove ea14efbbcf9c3c020492cca917ef36feb8154569bc9431d0da9382846d638ae1 (image=quay.io/ceph/ceph:v19, name=reverent_margulis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:18 compute-0 systemd[1]: libpod-conmon-ea14efbbcf9c3c020492cca917ef36feb8154569bc9431d0da9382846d638ae1.scope: Deactivated successfully.
Jan 26 09:42:18 compute-0 sudo[90370]: pam_unix(sudo:session): session closed for user root
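The short-lived reverent_margulis container existed only to run one dashboard command against the active mgr; once "Option GRAFANA_API_USERNAME updated" came back, podman removed it. Per the audit entry above, the command it carried reduces to:

    ceph dashboard set-grafana-api-username admin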
Jan 26 09:42:18 compute-0 podman[90700]: 2026-01-26 09:42:18.81761618 +0000 UTC m=+0.058228128 container exec 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:18 compute-0 sudo[90743]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxwiyvznlbupwqlmrtydthrvwilqqynq ; /usr/bin/python3'
Jan 26 09:42:18 compute-0 sudo[90743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:18 compute-0 podman[90700]: 2026-01-26 09:42:18.938337641 +0000 UTC m=+0.178949579 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 09:42:19 compute-0 python3[90745]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
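Note the trailing "-i -" in this ansible-driven podman call: the Grafana password is supplied on stdin (stdin=/home/grafana_password.yml) so it never appears in argv or in this log. A minimal sketch of the same pattern, assuming the secret sits in $GRAFANA_PASSWORD:

    # '-i -' tells the dashboard module to read the password from stdin
    printf '%s' "$GRAFANA_PASSWORD" | ceph dashboard set-grafana-api-password -i -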
Jan 26 09:42:19 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:42:19] ENGINE Bus STARTING
Jan 26 09:42:19 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:42:19] ENGINE Bus STARTING
Jan 26 09:42:19 compute-0 podman[90778]: 2026-01-26 09:42:19.111926844 +0000 UTC m=+0.050659932 container create f4a03f250cece7c986d50e0f2d0ffc571cd34ab4c430d76828bb8c0fa1da997c (image=quay.io/ceph/ceph:v19, name=pedantic_pike, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:19 compute-0 systemd[1]: Started libpod-conmon-f4a03f250cece7c986d50e0f2d0ffc571cd34ab4c430d76828bb8c0fa1da997c.scope.
Jan 26 09:42:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f706a48811a4c6054de24fb061c447c4f49a6216af593b56dcc69fd172fdfc15/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f706a48811a4c6054de24fb061c447c4f49a6216af593b56dcc69fd172fdfc15/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f706a48811a4c6054de24fb061c447c4f49a6216af593b56dcc69fd172fdfc15/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:19 compute-0 podman[90778]: 2026-01-26 09:42:19.085495894 +0000 UTC m=+0.024229002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:19 compute-0 podman[90778]: 2026-01-26 09:42:19.200105738 +0000 UTC m=+0.138838856 container init f4a03f250cece7c986d50e0f2d0ffc571cd34ab4c430d76828bb8c0fa1da997c (image=quay.io/ceph/ceph:v19, name=pedantic_pike, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Jan 26 09:42:19 compute-0 podman[90778]: 2026-01-26 09:42:19.207532151 +0000 UTC m=+0.146265239 container start f4a03f250cece7c986d50e0f2d0ffc571cd34ab4c430d76828bb8c0fa1da997c (image=quay.io/ceph/ceph:v19, name=pedantic_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 26 09:42:19 compute-0 podman[90778]: 2026-01-26 09:42:19.210992005 +0000 UTC m=+0.149725183 container attach f4a03f250cece7c986d50e0f2d0ffc571cd34ab4c430d76828bb8c0fa1da997c (image=quay.io/ceph/ceph:v19, name=pedantic_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 26 09:42:19 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:42:19] ENGINE Serving on https://192.168.122.100:7150
Jan 26 09:42:19 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:42:19] ENGINE Serving on https://192.168.122.100:7150
Jan 26 09:42:19 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:42:19] ENGINE Client ('192.168.122.100', 53286) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 09:42:19 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:42:19] ENGINE Client ('192.168.122.100', 53286) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 09:42:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:42:19 compute-0 sudo[90593]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:42:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:42:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:42:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:42:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:42:19 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:42:19] ENGINE Serving on http://192.168.122.100:8765
Jan 26 09:42:19 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:42:19] ENGINE Serving on http://192.168.122.100:8765
Jan 26 09:42:19 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:42:19] ENGINE Bus STARTED
Jan 26 09:42:19 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:42:19] ENGINE Bus STARTED
Jan 26 09:42:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 sudo[90873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:19 compute-0 sudo[90873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:19 compute-0 sudo[90873]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:19 compute-0 sudo[90898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:42:19 compute-0 sudo[90898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
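gather-facts is cephadm's host probe (CPU, memory, NICs, kernel, and so on); its JSON output feeds the mgr/cephadm/host.<name> config-key writes that appear a second later. It is safe to run standalone:

    # Dump this host's inventory as JSON
    sudo cephadm gather-facts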
Jan 26 09:42:19 compute-0 ceph-mon[74456]: 7.1a scrub starts
Jan 26 09:42:19 compute-0 ceph-mon[74456]: 7.1a scrub ok
Jan 26 09:42:19 compute-0 ceph-mon[74456]: 7.1f scrub starts
Jan 26 09:42:19 compute-0 ceph-mon[74456]: 7.1f scrub ok
Jan 26 09:42:19 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-2.fgzdbm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 26 09:42:19 compute-0 ceph-mon[74456]: from='client.? ' entity='client.rgw.rgw.compute-1.fbcidm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 26 09:42:19 compute-0 ceph-mon[74456]: osdmap e54: 3 total, 3 up, 3 in
Jan 26 09:42:19 compute-0 ceph-mon[74456]: 6.b scrub starts
Jan 26 09:42:19 compute-0 ceph-mon[74456]: 6.b scrub ok
Jan 26 09:42:19 compute-0 ceph-mon[74456]: mgrmap e15: compute-0.zllcia(active, since 1.0904s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:19 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 ceph-mon[74456]: [26/Jan/2026:09:42:19] ENGINE Bus STARTING
Jan 26 09:42:19 compute-0 ceph-mon[74456]: [26/Jan/2026:09:42:19] ENGINE Serving on https://192.168.122.100:7150
Jan 26 09:42:19 compute-0 ceph-mon[74456]: [26/Jan/2026:09:42:19] ENGINE Client ('192.168.122.100', 53286) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 09:42:19 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 ceph-mon[74456]: [26/Jan/2026:09:42:19] ENGINE Serving on http://192.168.122.100:8765
Jan 26 09:42:19 compute-0 ceph-mon[74456]: [26/Jan/2026:09:42:19] ENGINE Bus STARTED
Jan 26 09:42:19 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:19 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14397 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Jan 26 09:42:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:19 compute-0 pedantic_pike[90822]: Option GRAFANA_API_PASSWORD updated
Jan 26 09:42:19 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Jan 26 09:42:19 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Jan 26 09:42:19 compute-0 systemd[1]: libpod-f4a03f250cece7c986d50e0f2d0ffc571cd34ab4c430d76828bb8c0fa1da997c.scope: Deactivated successfully.
Jan 26 09:42:19 compute-0 podman[90778]: 2026-01-26 09:42:19.633044132 +0000 UTC m=+0.571777220 container died f4a03f250cece7c986d50e0f2d0ffc571cd34ab4c430d76828bb8c0fa1da997c (image=quay.io/ceph/ceph:v19, name=pedantic_pike, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 09:42:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f706a48811a4c6054de24fb061c447c4f49a6216af593b56dcc69fd172fdfc15-merged.mount: Deactivated successfully.
Jan 26 09:42:19 compute-0 podman[90778]: 2026-01-26 09:42:19.668149099 +0000 UTC m=+0.606882187 container remove f4a03f250cece7c986d50e0f2d0ffc571cd34ab4c430d76828bb8c0fa1da997c (image=quay.io/ceph/ceph:v19, name=pedantic_pike, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:19 compute-0 systemd[1]: libpod-conmon-f4a03f250cece7c986d50e0f2d0ffc571cd34ab4c430d76828bb8c0fa1da997c.scope: Deactivated successfully.
Jan 26 09:42:19 compute-0 sudo[90743]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:19 compute-0 ceph-mgr[74755]: [devicehealth INFO root] Check health
Jan 26 09:42:19 compute-0 sudo[91001]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkjjrfrjroumfuuxhjjuskitbzdwnkwc ; /usr/bin/python3'
Jan 26 09:42:19 compute-0 sudo[91001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:19 compute-0 sudo[90898]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:20 compute-0 python3[91003]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
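Same one-shot container pattern as the Grafana settings, this time pointing the dashboard at Alertmanager; the value lands in the mgr config as mgr/dashboard/ALERTMANAGER_API_HOST (see the mon handle_command below). Stripped of the podman wrapper:

    ceph dashboard set-alertmanager-api-host 'http://192.168.122.100:9093'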
Jan 26 09:42:20 compute-0 sudo[91004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:20 compute-0 podman[91016]: 2026-01-26 09:42:20.097048973 +0000 UTC m=+0.036952118 container create f05c5e709c5000f96a3c660631db17b5956f2053968d59ea99ffa190644e6616 (image=quay.io/ceph/ceph:v19, name=keen_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 26 09:42:20 compute-0 sudo[91004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:20 compute-0 sudo[91004]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:20 compute-0 systemd[1]: Started libpod-conmon-f05c5e709c5000f96a3c660631db17b5956f2053968d59ea99ffa190644e6616.scope.
Jan 26 09:42:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ceafc2a08dc7dc4cdbd7719016b0c5b6ec524b755970eb0045019f26b30ac1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ceafc2a08dc7dc4cdbd7719016b0c5b6ec524b755970eb0045019f26b30ac1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7ceafc2a08dc7dc4cdbd7719016b0c5b6ec524b755970eb0045019f26b30ac1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:20 compute-0 podman[91016]: 2026-01-26 09:42:20.081345255 +0000 UTC m=+0.021248430 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:20 compute-0 sudo[91042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 26 09:42:20 compute-0 podman[91016]: 2026-01-26 09:42:20.17614931 +0000 UTC m=+0.116052465 container init f05c5e709c5000f96a3c660631db17b5956f2053968d59ea99ffa190644e6616 (image=quay.io/ceph/ceph:v19, name=keen_jang, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 26 09:42:20 compute-0 sudo[91042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
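list-networks reports each host's subnets and addresses; cephadm uses it to decide where daemons may bind. Standalone:

    # Routable networks known to this host, as JSON
    sudo cephadm list-networks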
Jan 26 09:42:20 compute-0 podman[91016]: 2026-01-26 09:42:20.187489319 +0000 UTC m=+0.127392464 container start f05c5e709c5000f96a3c660631db17b5956f2053968d59ea99ffa190644e6616 (image=quay.io/ceph/ceph:v19, name=keen_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 09:42:20 compute-0 podman[91016]: 2026-01-26 09:42:20.191447086 +0000 UTC m=+0.131350281 container attach f05c5e709c5000f96a3c660631db17b5956f2053968d59ea99ffa190644e6616 (image=quay.io/ceph/ceph:v19, name=keen_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.zllcia(active, since 2s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 26 09:42:20 compute-0 sudo[91042]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 26 09:42:20 compute-0 ceph-mgr[74755]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 26 09:42:20 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 26 09:42:20 compute-0 ceph-mgr[74755]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 26 09:42:20 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
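The warning is arithmetic, not a fault: cephadm derived a per-OSD target of 127.9M (134209126 bytes) from this small VM's RAM, but osd_memory_target enforces a floor of 939524096 bytes (896 MiB), so the mon rejects the value. On undersized lab nodes, one way to silence the retry loop, assuming the documented _no_autotune_memory host label, is:

    # Opt this host out of cephadm's osd_memory_target autotuning
    ceph orch host label add compute-0 _no_autotune_memory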
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:42:20 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:42:20 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:42:20 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:42:20 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:42:20 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:42:20 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
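After the config churn above, cephadm regenerates the minimal client config (fsid plus mon addresses) and the admin keyring, then pushes both to the managed hosts; the two mon commands dispatched at 09:42:20 are the source of that content. To reproduce it:

    ceph config generate-minimal-conf   # just [global] fsid and mon_host
    ceph auth get client.admin          # the keyring being distributed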
Jan 26 09:42:20 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14409 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Jan 26 09:42:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 keen_jang[91062]: Option ALERTMANAGER_API_HOST updated
Jan 26 09:42:20 compute-0 systemd[1]: libpod-f05c5e709c5000f96a3c660631db17b5956f2053968d59ea99ffa190644e6616.scope: Deactivated successfully.
Jan 26 09:42:20 compute-0 podman[91016]: 2026-01-26 09:42:20.591133124 +0000 UTC m=+0.531036279 container died f05c5e709c5000f96a3c660631db17b5956f2053968d59ea99ffa190644e6616 (image=quay.io/ceph/ceph:v19, name=keen_jang, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:20 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 26 09:42:20 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 26 09:42:20 compute-0 ceph-mon[74456]: 4.1b scrub starts
Jan 26 09:42:20 compute-0 ceph-mon[74456]: 4.1b scrub ok
Jan 26 09:42:20 compute-0 ceph-mon[74456]: 2.18 scrub starts
Jan 26 09:42:20 compute-0 ceph-mon[74456]: 2.18 scrub ok
Jan 26 09:42:20 compute-0 ceph-mon[74456]: pgmap v5: 197 pgs: 197 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='client.14397 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 ceph-mon[74456]: 6.18 scrub starts
Jan 26 09:42:20 compute-0 ceph-mon[74456]: 6.18 scrub ok
Jan 26 09:42:20 compute-0 ceph-mon[74456]: mgrmap e16: compute-0.zllcia(active, since 2s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:42:20 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:20 compute-0 sudo[91112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 26 09:42:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7ceafc2a08dc7dc4cdbd7719016b0c5b6ec524b755970eb0045019f26b30ac1-merged.mount: Deactivated successfully.
Jan 26 09:42:20 compute-0 sudo[91112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:20 compute-0 sudo[91112]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:20 compute-0 podman[91016]: 2026-01-26 09:42:20.635641657 +0000 UTC m=+0.575544802 container remove f05c5e709c5000f96a3c660631db17b5956f2053968d59ea99ffa190644e6616 (image=quay.io/ceph/ceph:v19, name=keen_jang, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 09:42:20 compute-0 systemd[1]: libpod-conmon-f05c5e709c5000f96a3c660631db17b5956f2053968d59ea99ffa190644e6616.scope: Deactivated successfully.
Jan 26 09:42:20 compute-0 sudo[91001]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:20 compute-0 sudo[91149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph
Jan 26 09:42:20 compute-0 sudo[91149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:20 compute-0 sudo[91149]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:20 compute-0 sudo[91174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:42:20 compute-0 sudo[91174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:20 compute-0 sudo[91174]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:20 compute-0 sudo[91239]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynzwluiksirfkbilemueghfmzbfhyktg ; /usr/bin/python3'
Jan 26 09:42:20 compute-0 sudo[91239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:20 compute-0 sudo[91206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:42:20 compute-0 sudo[91206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:20 compute-0 sudo[91206]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:20 compute-0 sudo[91250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:42:20 compute-0 sudo[91250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:20 compute-0 sudo[91250]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:20 compute-0 python3[91248]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
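The last of the three dashboard endpoints set in this run; note the non-default Prometheus port 9092 (Prometheus itself defaults to 9090), copied verbatim from the playbook. Without the podman wrapper:

    ceph dashboard set-prometheus-api-host 'http://192.168.122.100:9092'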
Jan 26 09:42:20 compute-0 sudo[91298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:42:20 compute-0 sudo[91298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91298]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 podman[91304]: 2026-01-26 09:42:21.00180084 +0000 UTC m=+0.035245582 container create e289c7d48c8081fdf18a889ddf567ceb5ed82ab126b59bb5c5f17f2d91049421 (image=quay.io/ceph/ceph:v19, name=sweet_feistel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:21 compute-0 systemd[1]: Started libpod-conmon-e289c7d48c8081fdf18a889ddf567ceb5ed82ab126b59bb5c5f17f2d91049421.scope.
Jan 26 09:42:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:21 compute-0 sudo[91336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cd62bbd3e85f62473caac363bcaee8d9ccc187185bc784d7f9a1d75f759b31/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cd62bbd3e85f62473caac363bcaee8d9ccc187185bc784d7f9a1d75f759b31/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4cd62bbd3e85f62473caac363bcaee8d9ccc187185bc784d7f9a1d75f759b31/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:21 compute-0 sudo[91336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91336]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 podman[91304]: 2026-01-26 09:42:20.984252232 +0000 UTC m=+0.017696994 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:21 compute-0 podman[91304]: 2026-01-26 09:42:21.081429931 +0000 UTC m=+0.114874693 container init e289c7d48c8081fdf18a889ddf567ceb5ed82ab126b59bb5c5f17f2d91049421 (image=quay.io/ceph/ceph:v19, name=sweet_feistel, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:21 compute-0 podman[91304]: 2026-01-26 09:42:21.091101905 +0000 UTC m=+0.124546637 container start e289c7d48c8081fdf18a889ddf567ceb5ed82ab126b59bb5c5f17f2d91049421 (image=quay.io/ceph/ceph:v19, name=sweet_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:21 compute-0 podman[91304]: 2026-01-26 09:42:21.094299782 +0000 UTC m=+0.127744524 container attach e289c7d48c8081fdf18a889ddf567ceb5ed82ab126b59bb5c5f17f2d91049421 (image=quay.io/ceph/ceph:v19, name=sweet_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:21 compute-0 sudo[91366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 26 09:42:21 compute-0 sudo[91366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91366]: pam_unix(sudo:session): session closed for user root
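The mkdir/touch/chown/chmod/mv choreography since 09:42:20 is cephadm's staged-write pattern: the new file is assembled under /tmp/cephadm-<fsid>/ with its final ownership and mode, then renamed over the target so readers never see a half-written ceph.conf (the rename is only atomic when staging and target share a filesystem, so treat this as a sketch of intent). Condensed:

    # Staged write, as performed above for /etc/ceph/ceph.conf
    stage=/tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph
    sudo mkdir -p /etc/ceph "$stage"
    sudo touch "$stage/ceph.conf.new"
    sudo chown -R 0:0 "$stage/ceph.conf.new"
    sudo chmod 644 "$stage/ceph.conf.new"
    # ...content written into ceph.conf.new by the orchestrator...
    sudo mv "$stage/ceph.conf.new" /etc/ceph/ceph.conf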
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:21 compute-0 sudo[91392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:42:21 compute-0 sudo[91392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91392]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 sudo[91431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:42:21 compute-0 sudo[91431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91431]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 sudo[91461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:42:21 compute-0 sudo[91461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91461]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 sudo[91486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:42:21 compute-0 sudo[91486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91486]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 sudo[91511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:42:21 compute-0 sudo[91511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91511]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14415 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Jan 26 09:42:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:21 compute-0 sweet_feistel[91361]: Option PROMETHEUS_API_HOST updated
Jan 26 09:42:21 compute-0 systemd[1]: libpod-e289c7d48c8081fdf18a889ddf567ceb5ed82ab126b59bb5c5f17f2d91049421.scope: Deactivated successfully.
Jan 26 09:42:21 compute-0 podman[91304]: 2026-01-26 09:42:21.456628071 +0000 UTC m=+0.490072813 container died e289c7d48c8081fdf18a889ddf567ceb5ed82ab126b59bb5c5f17f2d91049421 (image=quay.io/ceph/ceph:v19, name=sweet_feistel, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4cd62bbd3e85f62473caac363bcaee8d9ccc187185bc784d7f9a1d75f759b31-merged.mount: Deactivated successfully.
Jan 26 09:42:21 compute-0 podman[91304]: 2026-01-26 09:42:21.497784352 +0000 UTC m=+0.531229094 container remove e289c7d48c8081fdf18a889ddf567ceb5ed82ab126b59bb5c5f17f2d91049421 (image=quay.io/ceph/ceph:v19, name=sweet_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 26 09:42:21 compute-0 systemd[1]: libpod-conmon-e289c7d48c8081fdf18a889ddf567ceb5ed82ab126b59bb5c5f17f2d91049421.scope: Deactivated successfully.
Jan 26 09:42:21 compute-0 sudo[91239]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 sudo[91562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:42:21 compute-0 sudo[91562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91562]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 sudo[91598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:42:21 compute-0 sudo[91598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91598]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:21 compute-0 sudo[91623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:21 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 26 09:42:21 compute-0 sudo[91623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91623]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:21 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 26 09:42:21 compute-0 sudo[91691]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oldubuurdcykcwkfprhbyvtwvatsqzlc ; /usr/bin/python3'
Jan 26 09:42:21 compute-0 sudo[91691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:21 compute-0 sudo[91656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 26 09:42:21 compute-0 sudo[91656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91656]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:21 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:21 compute-0 ceph-mon[74456]: 5.7 scrub starts
Jan 26 09:42:21 compute-0 ceph-mon[74456]: 5.7 scrub ok
Jan 26 09:42:21 compute-0 ceph-mon[74456]: 6.12 scrub starts
Jan 26 09:42:21 compute-0 ceph-mon[74456]: 6.12 scrub ok
Jan 26 09:42:21 compute-0 ceph-mon[74456]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 26 09:42:21 compute-0 ceph-mon[74456]: Unable to set osd_memory_target on compute-0 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 26 09:42:21 compute-0 ceph-mon[74456]: Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:42:21 compute-0 ceph-mon[74456]: Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:42:21 compute-0 ceph-mon[74456]: Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:42:21 compute-0 ceph-mon[74456]: from='client.14409 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:21 compute-0 ceph-mon[74456]: 6.4 scrub starts
Jan 26 09:42:21 compute-0 ceph-mon[74456]: 6.4 scrub ok
Jan 26 09:42:21 compute-0 ceph-mon[74456]: Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:21 compute-0 ceph-mon[74456]: Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:21 compute-0 ceph-mon[74456]: Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:21 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:21 compute-0 sudo[91699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph
Jan 26 09:42:21 compute-0 sudo[91699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91699]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 sudo[91724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:42:21 compute-0 python3[91696]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:21 compute-0 sudo[91724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91724]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 sudo[91750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:42:21 compute-0 podman[91749]: 2026-01-26 09:42:21.875870281 +0000 UTC m=+0.048144834 container create 36f3c6127daf0315339769a28d4739480977e01a4e193b5427cdddf4b8579e73 (image=quay.io/ceph/ceph:v19, name=jolly_lederberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 26 09:42:21 compute-0 sudo[91750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91750]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 systemd[1]: Started libpod-conmon-36f3c6127daf0315339769a28d4739480977e01a4e193b5427cdddf4b8579e73.scope.
Jan 26 09:42:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d01dff031df04c15f94e19020ea7c4711e908e886bd0d9ea72f8630dddfa3f77/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d01dff031df04c15f94e19020ea7c4711e908e886bd0d9ea72f8630dddfa3f77/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d01dff031df04c15f94e19020ea7c4711e908e886bd0d9ea72f8630dddfa3f77/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:21 compute-0 sudo[91787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:42:21 compute-0 sudo[91787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:21 compute-0 sudo[91787]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:21 compute-0 podman[91749]: 2026-01-26 09:42:21.946498927 +0000 UTC m=+0.118773510 container init 36f3c6127daf0315339769a28d4739480977e01a4e193b5427cdddf4b8579e73 (image=quay.io/ceph/ceph:v19, name=jolly_lederberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 26 09:42:21 compute-0 podman[91749]: 2026-01-26 09:42:21.853915533 +0000 UTC m=+0.026190146 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:21 compute-0 podman[91749]: 2026-01-26 09:42:21.953178959 +0000 UTC m=+0.125453512 container start 36f3c6127daf0315339769a28d4739480977e01a4e193b5427cdddf4b8579e73 (image=quay.io/ceph/ceph:v19, name=jolly_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:21 compute-0 podman[91749]: 2026-01-26 09:42:21.956380266 +0000 UTC m=+0.128654849 container attach 36f3c6127daf0315339769a28d4739480977e01a4e193b5427cdddf4b8579e73 (image=quay.io/ceph/ceph:v19, name=jolly_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:22 compute-0 sudo[91841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:42:22 compute-0 sudo[91841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:22 compute-0 sudo[91841]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:22 compute-0 sudo[91885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:42:22 compute-0 sudo[91885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:22 compute-0 sudo[91885]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:22 compute-0 sudo[91910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 sudo[91910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:22 compute-0 sudo[91910]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:22 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 sudo[91935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:42:22 compute-0 sudo[91935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:22 compute-0 sudo[91935]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:22 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.24161 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Jan 26 09:42:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:22 compute-0 jolly_lederberg[91799]: Option GRAFANA_API_URL updated
Jan 26 09:42:22 compute-0 sudo[91960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:42:22 compute-0 systemd[1]: libpod-36f3c6127daf0315339769a28d4739480977e01a4e193b5427cdddf4b8579e73.scope: Deactivated successfully.
Jan 26 09:42:22 compute-0 sudo[91960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:22 compute-0 podman[91749]: 2026-01-26 09:42:22.321271005 +0000 UTC m=+0.493545588 container died 36f3c6127daf0315339769a28d4739480977e01a4e193b5427cdddf4b8579e73 (image=quay.io/ceph/ceph:v19, name=jolly_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:22 compute-0 sudo[91960]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:42:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d01dff031df04c15f94e19020ea7c4711e908e886bd0d9ea72f8630dddfa3f77-merged.mount: Deactivated successfully.
Jan 26 09:42:22 compute-0 podman[91749]: 2026-01-26 09:42:22.363574968 +0000 UTC m=+0.535849521 container remove 36f3c6127daf0315339769a28d4739480977e01a4e193b5427cdddf4b8579e73 (image=quay.io/ceph/ceph:v19, name=jolly_lederberg, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:22 compute-0 systemd[1]: libpod-conmon-36f3c6127daf0315339769a28d4739480977e01a4e193b5427cdddf4b8579e73.scope: Deactivated successfully.
Jan 26 09:42:22 compute-0 sudo[91988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:42:22 compute-0 sudo[91988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:22 compute-0 sudo[91988]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:22 compute-0 sudo[91691]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:22 compute-0 sudo[92023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:42:22 compute-0 sudo[92023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:22 compute-0 sudo[92023]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:22 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.zllcia(active, since 4s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:22 compute-0 sudo[92048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:42:22 compute-0 sudo[92048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:22 compute-0 sudo[92048]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:22 compute-0 sudo[92095]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxiwpigqrijkppfpovzsgctpjwzekbfv ; /usr/bin/python3'
Jan 26 09:42:22 compute-0 sudo[92095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:22 compute-0 sudo[92122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:42:22 compute-0 sudo[92122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:22 compute-0 sudo[92122]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:22 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.1f deep-scrub starts
Jan 26 09:42:22 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.1f deep-scrub ok
Jan 26 09:42:22 compute-0 sudo[92147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:42:22 compute-0 sudo[92147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:22 compute-0 sudo[92147]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:22 compute-0 python3[92102]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:22 compute-0 sudo[92172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 sudo[92172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:22 compute-0 sudo[92172]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:42:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:42:22 compute-0 ceph-mon[74456]: 4.18 scrub starts
Jan 26 09:42:22 compute-0 ceph-mon[74456]: 4.18 scrub ok
Jan 26 09:42:22 compute-0 ceph-mon[74456]: from='client.14415 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:22 compute-0 ceph-mon[74456]: 7.11 scrub starts
Jan 26 09:42:22 compute-0 ceph-mon[74456]: 7.11 scrub ok
Jan 26 09:42:22 compute-0 ceph-mon[74456]: pgmap v6: 197 pgs: 197 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:22 compute-0 ceph-mon[74456]: 6.9 scrub starts
Jan 26 09:42:22 compute-0 ceph-mon[74456]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 ceph-mon[74456]: 6.9 scrub ok
Jan 26 09:42:22 compute-0 ceph-mon[74456]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 ceph-mon[74456]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 ceph-mon[74456]: Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 ceph-mon[74456]: Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 ceph-mon[74456]: Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:22 compute-0 ceph-mon[74456]: from='client.24161 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:22 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:22 compute-0 ceph-mon[74456]: mgrmap e17: compute-0.zllcia(active, since 4s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:22 compute-0 podman[92184]: 2026-01-26 09:42:22.756413928 +0000 UTC m=+0.039147638 container create e48f6d060c23c7e6016eecf6f5949bd91858c46626a75bb753dbda2aca0e277e (image=quay.io/ceph/ceph:v19, name=pensive_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 09:42:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:42:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:22 compute-0 systemd[1]: Started libpod-conmon-e48f6d060c23c7e6016eecf6f5949bd91858c46626a75bb753dbda2aca0e277e.scope.
Jan 26 09:42:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:42:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:42:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:42:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618ff3c43386d8a4a6e4f38935a1ac410e8e33c23dceb7b9ca1f9bfb9cea4f48/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618ff3c43386d8a4a6e4f38935a1ac410e8e33c23dceb7b9ca1f9bfb9cea4f48/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618ff3c43386d8a4a6e4f38935a1ac410e8e33c23dceb7b9ca1f9bfb9cea4f48/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:42:22 compute-0 podman[92184]: 2026-01-26 09:42:22.823886039 +0000 UTC m=+0.106619749 container init e48f6d060c23c7e6016eecf6f5949bd91858c46626a75bb753dbda2aca0e277e (image=quay.io/ceph/ceph:v19, name=pensive_hodgkin, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:22 compute-0 podman[92184]: 2026-01-26 09:42:22.830537449 +0000 UTC m=+0.113271159 container start e48f6d060c23c7e6016eecf6f5949bd91858c46626a75bb753dbda2aca0e277e (image=quay.io/ceph/ceph:v19, name=pensive_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 09:42:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:22 compute-0 podman[92184]: 2026-01-26 09:42:22.833403327 +0000 UTC m=+0.116137037 container attach e48f6d060c23c7e6016eecf6f5949bd91858c46626a75bb753dbda2aca0e277e (image=quay.io/ceph/ceph:v19, name=pensive_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:22 compute-0 podman[92184]: 2026-01-26 09:42:22.739688322 +0000 UTC m=+0.022422052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:22 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 2fabb162-d1c5-46ef-934e-7d5a4abf2389 (Updating node-exporter deployment (+3 -> 3))
Jan 26 09:42:22 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Jan 26 09:42:22 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Jan 26 09:42:22 compute-0 sudo[92216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:22 compute-0 sudo[92216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:22 compute-0 sudo[92216]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:22 compute-0 sudo[92241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:42:22 compute-0 sudo[92241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Jan 26 09:42:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1657408057' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 26 09:42:23 compute-0 systemd[1]: Reloading.
Jan 26 09:42:23 compute-0 systemd-sysv-generator[92358]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:42:23 compute-0 systemd-rc-local-generator[92354]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:42:23 compute-0 systemd[1]: Reloading.
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v7: 197 pgs: 197 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:23 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 26 09:42:23 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 26 09:42:23 compute-0 systemd-sysv-generator[92400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:42:23 compute-0 systemd-rc-local-generator[92395]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:42:23 compute-0 ceph-mon[74456]: 6.1a scrub starts
Jan 26 09:42:23 compute-0 ceph-mon[74456]: 6.1a scrub ok
Jan 26 09:42:23 compute-0 ceph-mon[74456]: 2.15 scrub starts
Jan 26 09:42:23 compute-0 ceph-mon[74456]: 2.15 scrub ok
Jan 26 09:42:23 compute-0 ceph-mon[74456]: 6.1f deep-scrub starts
Jan 26 09:42:23 compute-0 ceph-mon[74456]: 6.1f deep-scrub ok
Jan 26 09:42:23 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:23 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:23 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:23 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:23 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:23 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:23 compute-0 ceph-mon[74456]: from='mgr.14358 192.168.122.100:0/344283424' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:23 compute-0 ceph-mon[74456]: Deploying daemon node-exporter.compute-0 on compute-0
Jan 26 09:42:23 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1657408057' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 26 09:42:23 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:42:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1657408057' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn  1: '-n'
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn  2: 'mgr.compute-0.zllcia'
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn  3: '-f'
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn  4: '--setuser'
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn  5: 'ceph'
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn  6: '--setgroup'
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn  7: 'ceph'
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn  8: '--default-log-to-file=false'
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn  9: '--default-log-to-journald=true'
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 26 09:42:23 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.zllcia(active, since 6s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: mgr respawn  exe_path /proc/self/exe
Jan 26 09:42:23 compute-0 systemd[1]: libpod-e48f6d060c23c7e6016eecf6f5949bd91858c46626a75bb753dbda2aca0e277e.scope: Deactivated successfully.
Jan 26 09:42:23 compute-0 conmon[92212]: conmon e48f6d060c23c7e6016e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e48f6d060c23c7e6016eecf6f5949bd91858c46626a75bb753dbda2aca0e277e.scope/container/memory.events
Jan 26 09:42:23 compute-0 podman[92184]: 2026-01-26 09:42:23.893700166 +0000 UTC m=+1.176433876 container died e48f6d060c23c7e6016eecf6f5949bd91858c46626a75bb753dbda2aca0e277e (image=quay.io/ceph/ceph:v19, name=pensive_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-618ff3c43386d8a4a6e4f38935a1ac410e8e33c23dceb7b9ca1f9bfb9cea4f48-merged.mount: Deactivated successfully.
Jan 26 09:42:23 compute-0 podman[92184]: 2026-01-26 09:42:23.940633425 +0000 UTC m=+1.223367135 container remove e48f6d060c23c7e6016eecf6f5949bd91858c46626a75bb753dbda2aca0e277e (image=quay.io/ceph/ceph:v19, name=pensive_hodgkin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 09:42:23 compute-0 sshd-session[90566]: Connection closed by 192.168.122.100 port 39310
Jan 26 09:42:23 compute-0 systemd[1]: libpod-conmon-e48f6d060c23c7e6016eecf6f5949bd91858c46626a75bb753dbda2aca0e277e.scope: Deactivated successfully.
Jan 26 09:42:23 compute-0 sshd-session[90552]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:42:23 compute-0 systemd-logind[787]: Session 34 logged out. Waiting for processes to exit.
Jan 26 09:42:23 compute-0 sudo[92095]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ignoring --setuser ceph since I am not root
Jan 26 09:42:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ignoring --setgroup ceph since I am not root
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 26 09:42:23 compute-0 ceph-mgr[74755]: pidfile_write: ignore empty --pid-file
Jan 26 09:42:24 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'alerts'
Jan 26 09:42:24 compute-0 bash[92481]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Jan 26 09:42:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:24.108+0000 7f3c67187140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 09:42:24 compute-0 ceph-mgr[74755]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 09:42:24 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'balancer'
Jan 26 09:42:24 compute-0 sudo[92515]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rinhevaxnxfuoamqexqbqbdmzalpdcck ; /usr/bin/python3'
Jan 26 09:42:24 compute-0 sudo[92515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:24.192+0000 7f3c67187140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 09:42:24 compute-0 ceph-mgr[74755]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 09:42:24 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'cephadm'
Jan 26 09:42:24 compute-0 python3[92518]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:24 compute-0 podman[92519]: 2026-01-26 09:42:24.310739666 +0000 UTC m=+0.044360850 container create 6dd0c564de1c28e9ca12f02edcce321e91fe42fcc9b987d33f44e4b496fc421b (image=quay.io/ceph/ceph:v19, name=awesome_ramanujan, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:42:24 compute-0 systemd[1]: Started libpod-conmon-6dd0c564de1c28e9ca12f02edcce321e91fe42fcc9b987d33f44e4b496fc421b.scope.
Jan 26 09:42:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b22ec23af0811863b4f90c9041f782438694badf8f6971ba7973594e621e7b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b22ec23af0811863b4f90c9041f782438694badf8f6971ba7973594e621e7b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:24 compute-0 podman[92519]: 2026-01-26 09:42:24.295600904 +0000 UTC m=+0.029222108 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b22ec23af0811863b4f90c9041f782438694badf8f6971ba7973594e621e7b5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:24 compute-0 podman[92519]: 2026-01-26 09:42:24.400895344 +0000 UTC m=+0.134516578 container init 6dd0c564de1c28e9ca12f02edcce321e91fe42fcc9b987d33f44e4b496fc421b (image=quay.io/ceph/ceph:v19, name=awesome_ramanujan, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:24 compute-0 podman[92519]: 2026-01-26 09:42:24.416255103 +0000 UTC m=+0.149876287 container start 6dd0c564de1c28e9ca12f02edcce321e91fe42fcc9b987d33f44e4b496fc421b (image=quay.io/ceph/ceph:v19, name=awesome_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 09:42:24 compute-0 podman[92519]: 2026-01-26 09:42:24.419660066 +0000 UTC m=+0.153281260 container attach 6dd0c564de1c28e9ca12f02edcce321e91fe42fcc9b987d33f44e4b496fc421b (image=quay.io/ceph/ceph:v19, name=awesome_ramanujan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 09:42:24 compute-0 bash[92481]: Getting image source signatures
Jan 26 09:42:24 compute-0 bash[92481]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Jan 26 09:42:24 compute-0 bash[92481]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Jan 26 09:42:24 compute-0 bash[92481]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Jan 26 09:42:24 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Jan 26 09:42:24 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Jan 26 09:42:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Jan 26 09:42:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/974467440' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 26 09:42:24 compute-0 ceph-mon[74456]: 5.1c deep-scrub starts
Jan 26 09:42:24 compute-0 ceph-mon[74456]: 5.1c deep-scrub ok
Jan 26 09:42:24 compute-0 ceph-mon[74456]: 7.1d deep-scrub starts
Jan 26 09:42:24 compute-0 ceph-mon[74456]: 7.1d deep-scrub ok
Jan 26 09:42:24 compute-0 ceph-mon[74456]: 4.1e scrub starts
Jan 26 09:42:24 compute-0 ceph-mon[74456]: 4.1e scrub ok
Jan 26 09:42:24 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1657408057' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 26 09:42:24 compute-0 ceph-mon[74456]: mgrmap e18: compute-0.zllcia(active, since 6s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:24 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/974467440' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 26 09:42:24 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'crash'
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:25.002+0000 7f3c67187140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 09:42:25 compute-0 ceph-mgr[74755]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 09:42:25 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'dashboard'
Jan 26 09:42:25 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/974467440' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 26 09:42:25 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.zllcia(active, since 7s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:25 compute-0 bash[92481]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Jan 26 09:42:25 compute-0 systemd[1]: libpod-6dd0c564de1c28e9ca12f02edcce321e91fe42fcc9b987d33f44e4b496fc421b.scope: Deactivated successfully.
Jan 26 09:42:25 compute-0 podman[92519]: 2026-01-26 09:42:25.134307121 +0000 UTC m=+0.867928305 container died 6dd0c564de1c28e9ca12f02edcce321e91fe42fcc9b987d33f44e4b496fc421b (image=quay.io/ceph/ceph:v19, name=awesome_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Jan 26 09:42:25 compute-0 bash[92481]: Writing manifest to image destination
Jan 26 09:42:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b22ec23af0811863b4f90c9041f782438694badf8f6971ba7973594e621e7b5-merged.mount: Deactivated successfully.
Jan 26 09:42:25 compute-0 podman[92481]: 2026-01-26 09:42:25.162992342 +0000 UTC m=+1.117341524 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Jan 26 09:42:25 compute-0 podman[92519]: 2026-01-26 09:42:25.17172731 +0000 UTC m=+0.905348494 container remove 6dd0c564de1c28e9ca12f02edcce321e91fe42fcc9b987d33f44e4b496fc421b (image=quay.io/ceph/ceph:v19, name=awesome_ramanujan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 26 09:42:25 compute-0 podman[92481]: 2026-01-26 09:42:25.182091643 +0000 UTC m=+1.136440805 container create 57a35f5609c036543a7218c3413c7cd92ec725c73b5cc2d0a3c41170bf8442ad (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:42:25 compute-0 sudo[92515]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:25 compute-0 systemd[1]: libpod-conmon-6dd0c564de1c28e9ca12f02edcce321e91fe42fcc9b987d33f44e4b496fc421b.scope: Deactivated successfully.
Jan 26 09:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51986ee8e92485b242aae3a1338ba7fcf09f06e9b210689232bc8e53e2a66e03/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:25 compute-0 podman[92481]: 2026-01-26 09:42:25.23806566 +0000 UTC m=+1.192414872 container init 57a35f5609c036543a7218c3413c7cd92ec725c73b5cc2d0a3c41170bf8442ad (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:42:25 compute-0 podman[92481]: 2026-01-26 09:42:25.245003989 +0000 UTC m=+1.199353161 container start 57a35f5609c036543a7218c3413c7cd92ec725c73b5cc2d0a3c41170bf8442ad (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:42:25 compute-0 bash[92481]: 57a35f5609c036543a7218c3413c7cd92ec725c73b5cc2d0a3c41170bf8442ad
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.255Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.255Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.256Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.256Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.256Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.256Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 26 09:42:25 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=arp
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=bcache
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=bonding
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=cpu
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=dmi
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=edac
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=entropy
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=filefd
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=hwmon
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=netclass
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=netdev
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=netstat
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=nfs
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=nvme
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=os
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=pressure
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=rapl
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=selinux
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=softnet
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=stat
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=textfile
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=thermal_zone
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=time
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=uname
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=xfs
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.257Z caller=node_exporter.go:117 level=info collector=zfs
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.258Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[92645]: ts=2026-01-26T09:42:25.258Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Jan 26 09:42:25 compute-0 sudo[92241]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:25 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 26 09:42:25 compute-0 systemd[1]: session-34.scope: Consumed 5.004s CPU time.
Jan 26 09:42:25 compute-0 systemd-logind[787]: Removed session 34.
Jan 26 09:42:25 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'devicehealth'
Jan 26 09:42:25 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Jan 26 09:42:25 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:25.657+0000 7f3c67187140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 09:42:25 compute-0 ceph-mgr[74755]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 09:42:25 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'diskprediction_local'
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   from numpy import show_config as show_numpy_config
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:25.835+0000 7f3c67187140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 09:42:25 compute-0 ceph-mgr[74755]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 09:42:25 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'influx'
Jan 26 09:42:25 compute-0 ceph-mon[74456]: 4.e scrub starts
Jan 26 09:42:25 compute-0 ceph-mon[74456]: 4.e scrub ok
Jan 26 09:42:25 compute-0 ceph-mon[74456]: 4.14 scrub starts
Jan 26 09:42:25 compute-0 ceph-mon[74456]: 4.14 scrub ok
Jan 26 09:42:25 compute-0 ceph-mon[74456]: 6.1d scrub starts
Jan 26 09:42:25 compute-0 ceph-mon[74456]: 6.1d scrub ok
Jan 26 09:42:25 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/974467440' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 26 09:42:25 compute-0 ceph-mon[74456]: mgrmap e19: compute-0.zllcia(active, since 7s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:25.912+0000 7f3c67187140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 09:42:25 compute-0 ceph-mgr[74755]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 09:42:25 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'insights'
Jan 26 09:42:25 compute-0 python3[92729]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:42:25 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'iostat'
Jan 26 09:42:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:26.049+0000 7f3c67187140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 09:42:26 compute-0 ceph-mgr[74755]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 09:42:26 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'k8sevents'
Jan 26 09:42:26 compute-0 python3[92800]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769420545.6760287-37470-46540894793702/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:42:26 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'localpool'
Jan 26 09:42:26 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'mds_autoscaler'
Jan 26 09:42:26 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 26 09:42:26 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 26 09:42:26 compute-0 sudo[92848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mygrcszneqwehbxzkbspsxehktwhpcyv ; /usr/bin/python3'
Jan 26 09:42:26 compute-0 sudo[92848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:26 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'mirroring'
Jan 26 09:42:26 compute-0 python3[92850]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:26 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'nfs'
Jan 26 09:42:26 compute-0 podman[92851]: 2026-01-26 09:42:26.838206366 +0000 UTC m=+0.054237200 container create 47876ef175b26e6d15b44bf2444bcf2a7df301b9ac45b869080413d5973c4968 (image=quay.io/ceph/ceph:v19, name=friendly_shirley, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:26 compute-0 systemd[1]: Started libpod-conmon-47876ef175b26e6d15b44bf2444bcf2a7df301b9ac45b869080413d5973c4968.scope.
Jan 26 09:42:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccaa957b5ceb3ca6c69f9b8974b6510ff6d1a286b990c4a77470ab41afb434a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccaa957b5ceb3ca6c69f9b8974b6510ff6d1a286b990c4a77470ab41afb434a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccaa957b5ceb3ca6c69f9b8974b6510ff6d1a286b990c4a77470ab41afb434a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:26 compute-0 podman[92851]: 2026-01-26 09:42:26.814822999 +0000 UTC m=+0.030853883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:26 compute-0 podman[92851]: 2026-01-26 09:42:26.917721944 +0000 UTC m=+0.133752878 container init 47876ef175b26e6d15b44bf2444bcf2a7df301b9ac45b869080413d5973c4968 (image=quay.io/ceph/ceph:v19, name=friendly_shirley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 09:42:26 compute-0 podman[92851]: 2026-01-26 09:42:26.930789821 +0000 UTC m=+0.146820685 container start 47876ef175b26e6d15b44bf2444bcf2a7df301b9ac45b869080413d5973c4968 (image=quay.io/ceph/ceph:v19, name=friendly_shirley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:26 compute-0 ceph-mon[74456]: 6.e scrub starts
Jan 26 09:42:26 compute-0 ceph-mon[74456]: 6.e scrub ok
Jan 26 09:42:26 compute-0 ceph-mon[74456]: 2.12 scrub starts
Jan 26 09:42:26 compute-0 ceph-mon[74456]: 2.12 scrub ok
Jan 26 09:42:26 compute-0 ceph-mon[74456]: 5.19 deep-scrub starts
Jan 26 09:42:26 compute-0 ceph-mon[74456]: 5.19 deep-scrub ok
Jan 26 09:42:26 compute-0 podman[92851]: 2026-01-26 09:42:26.934509591 +0000 UTC m=+0.150540475 container attach 47876ef175b26e6d15b44bf2444bcf2a7df301b9ac45b869080413d5973c4968 (image=quay.io/ceph/ceph:v19, name=friendly_shirley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:27.056+0000 7f3c67187140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 09:42:27 compute-0 ceph-mgr[74755]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 09:42:27 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'orchestrator'
Jan 26 09:42:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:27.308+0000 7f3c67187140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 09:42:27 compute-0 ceph-mgr[74755]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 09:42:27 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'osd_perf_query'
Jan 26 09:42:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:42:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:27.392+0000 7f3c67187140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 09:42:27 compute-0 ceph-mgr[74755]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 09:42:27 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'osd_support'
Jan 26 09:42:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:27.466+0000 7f3c67187140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 09:42:27 compute-0 ceph-mgr[74755]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 09:42:27 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'pg_autoscaler'
Jan 26 09:42:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:27.559+0000 7f3c67187140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 09:42:27 compute-0 ceph-mgr[74755]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 09:42:27 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'progress'
Jan 26 09:42:27 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 26 09:42:27 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 26 09:42:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:27.637+0000 7f3c67187140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 09:42:27 compute-0 ceph-mgr[74755]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 09:42:27 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'prometheus'
Jan 26 09:42:27 compute-0 ceph-mon[74456]: 6.d deep-scrub starts
Jan 26 09:42:27 compute-0 ceph-mon[74456]: 6.d deep-scrub ok
Jan 26 09:42:27 compute-0 ceph-mon[74456]: 6.17 scrub starts
Jan 26 09:42:27 compute-0 ceph-mon[74456]: 6.17 scrub ok
Jan 26 09:42:27 compute-0 ceph-mon[74456]: 5.17 scrub starts
Jan 26 09:42:27 compute-0 ceph-mon[74456]: 5.17 scrub ok
Jan 26 09:42:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:28.026+0000 7f3c67187140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 09:42:28 compute-0 ceph-mgr[74755]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 09:42:28 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rbd_support'
Jan 26 09:42:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:28.123+0000 7f3c67187140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 09:42:28 compute-0 ceph-mgr[74755]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 09:42:28 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'restful'
Jan 26 09:42:28 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rgw'
Jan 26 09:42:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:28.604+0000 7f3c67187140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 09:42:28 compute-0 ceph-mgr[74755]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 09:42:28 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rook'
Jan 26 09:42:28 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 26 09:42:28 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 26 09:42:28 compute-0 ceph-mon[74456]: 6.3 scrub starts
Jan 26 09:42:28 compute-0 ceph-mon[74456]: 6.3 scrub ok
Jan 26 09:42:28 compute-0 ceph-mon[74456]: 3.12 scrub starts
Jan 26 09:42:28 compute-0 ceph-mon[74456]: 3.12 scrub ok
Jan 26 09:42:28 compute-0 ceph-mon[74456]: 7.16 scrub starts
Jan 26 09:42:28 compute-0 ceph-mon[74456]: 7.16 scrub ok
Jan 26 09:42:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:29.184+0000 7f3c67187140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 09:42:29 compute-0 ceph-mgr[74755]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 09:42:29 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'selftest'
Jan 26 09:42:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:29.282+0000 7f3c67187140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 09:42:29 compute-0 ceph-mgr[74755]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 09:42:29 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'snap_schedule'
Jan 26 09:42:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:29.380+0000 7f3c67187140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 09:42:29 compute-0 ceph-mgr[74755]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 09:42:29 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'stats'
Jan 26 09:42:29 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'status'
Jan 26 09:42:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:29.540+0000 7f3c67187140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 09:42:29 compute-0 ceph-mgr[74755]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 09:42:29 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'telegraf'
Jan 26 09:42:29 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 26 09:42:29 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 26 09:42:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:29.620+0000 7f3c67187140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 09:42:29 compute-0 ceph-mgr[74755]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 09:42:29 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'telemetry'
Jan 26 09:42:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:29.790+0000 7f3c67187140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 09:42:29 compute-0 ceph-mgr[74755]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 09:42:29 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'test_orchestrator'
Jan 26 09:42:29 compute-0 ceph-mon[74456]: 6.7 scrub starts
Jan 26 09:42:29 compute-0 ceph-mon[74456]: 6.7 scrub ok
Jan 26 09:42:29 compute-0 ceph-mon[74456]: 2.10 scrub starts
Jan 26 09:42:29 compute-0 ceph-mon[74456]: 2.10 scrub ok
Jan 26 09:42:29 compute-0 ceph-mon[74456]: 7.1b scrub starts
Jan 26 09:42:29 compute-0 ceph-mon[74456]: 7.1b scrub ok
Jan 26 09:42:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:30.021+0000 7f3c67187140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'volumes'
Jan 26 09:42:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:30.295+0000 7f3c67187140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'zabbix'
Jan 26 09:42:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:30.371+0000 7f3c67187140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 09:42:30 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Active manager daemon compute-0.zllcia restarted
Jan 26 09:42:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 26 09:42:30 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zllcia
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: ms_deliver_dispatch: unhandled message 0x56130611b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn  1: '-n'
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn  2: 'mgr.compute-0.zllcia'
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn  3: '-f'
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn  4: '--setuser'
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn  5: 'ceph'
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn  6: '--setgroup'
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn  7: 'ceph'
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn  8: '--default-log-to-file=false'
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn  9: '--default-log-to-journald=true'
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr respawn  exe_path /proc/self/exe
Jan 26 09:42:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 26 09:42:30 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 26 09:42:30 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.zllcia(active, starting, since 0.0272396s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:30 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oynaeu restarted
Jan 26 09:42:30 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oynaeu started
Jan 26 09:42:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ignoring --setuser ceph since I am not root
Jan 26 09:42:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ignoring --setgroup ceph since I am not root
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: pidfile_write: ignore empty --pid-file
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'alerts'
Jan 26 09:42:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:30.592+0000 7f6652c85140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'balancer'
Jan 26 09:42:30 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 26 09:42:30 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 26 09:42:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:30.670+0000 7f6652c85140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 09:42:30 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'cephadm'
Jan 26 09:42:30 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.xammti restarted
Jan 26 09:42:30 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.xammti started
Jan 26 09:42:30 compute-0 ceph-mon[74456]: 6.2 deep-scrub starts
Jan 26 09:42:30 compute-0 ceph-mon[74456]: 6.2 deep-scrub ok
Jan 26 09:42:30 compute-0 ceph-mon[74456]: 2.13 scrub starts
Jan 26 09:42:30 compute-0 ceph-mon[74456]: 2.13 scrub ok
Jan 26 09:42:30 compute-0 ceph-mon[74456]: 5.14 scrub starts
Jan 26 09:42:30 compute-0 ceph-mon[74456]: 5.14 scrub ok
Jan 26 09:42:30 compute-0 ceph-mon[74456]: Active manager daemon compute-0.zllcia restarted
Jan 26 09:42:30 compute-0 ceph-mon[74456]: Activating manager daemon compute-0.zllcia
Jan 26 09:42:30 compute-0 ceph-mon[74456]: osdmap e55: 3 total, 3 up, 3 in
Jan 26 09:42:30 compute-0 ceph-mon[74456]: mgrmap e20: compute-0.zllcia(active, starting, since 0.0272396s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:30 compute-0 ceph-mon[74456]: Standby manager daemon compute-2.oynaeu restarted
Jan 26 09:42:30 compute-0 ceph-mon[74456]: Standby manager daemon compute-2.oynaeu started
Jan 26 09:42:30 compute-0 ceph-mon[74456]: Standby manager daemon compute-1.xammti restarted
Jan 26 09:42:30 compute-0 ceph-mon[74456]: Standby manager daemon compute-1.xammti started
Jan 26 09:42:31 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'crash'
Jan 26 09:42:31 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.zllcia(active, starting, since 1.05323s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:31.463+0000 7f6652c85140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 09:42:31 compute-0 ceph-mgr[74755]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 09:42:31 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'dashboard'
Jan 26 09:42:31 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.17 deep-scrub starts
Jan 26 09:42:31 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.17 deep-scrub ok
Jan 26 09:42:31 compute-0 ceph-mon[74456]: 6.8 scrub starts
Jan 26 09:42:31 compute-0 ceph-mon[74456]: 6.8 scrub ok
Jan 26 09:42:31 compute-0 ceph-mon[74456]: 6.1c scrub starts
Jan 26 09:42:31 compute-0 ceph-mon[74456]: 6.1c scrub ok
Jan 26 09:42:31 compute-0 ceph-mon[74456]: 7.10 scrub starts
Jan 26 09:42:31 compute-0 ceph-mon[74456]: 7.10 scrub ok
Jan 26 09:42:31 compute-0 ceph-mon[74456]: mgrmap e21: compute-0.zllcia(active, starting, since 1.05323s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'devicehealth'
Jan 26 09:42:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:32.116+0000 7f6652c85140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 09:42:32 compute-0 ceph-mgr[74755]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 09:42:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'diskprediction_local'
Jan 26 09:42:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 26 09:42:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 26 09:42:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   from numpy import show_config as show_numpy_config
Jan 26 09:42:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:32.284+0000 7f6652c85140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 09:42:32 compute-0 ceph-mgr[74755]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 09:42:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'influx'
Jan 26 09:42:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:42:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:32.363+0000 7f6652c85140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 09:42:32 compute-0 ceph-mgr[74755]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 09:42:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'insights'
Jan 26 09:42:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'iostat'
Jan 26 09:42:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:32.501+0000 7f6652c85140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 09:42:32 compute-0 ceph-mgr[74755]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 09:42:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'k8sevents'
Jan 26 09:42:32 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 26 09:42:32 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 26 09:42:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'localpool'
Jan 26 09:42:32 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'mds_autoscaler'
Jan 26 09:42:32 compute-0 ceph-mon[74456]: 3.d scrub starts
Jan 26 09:42:32 compute-0 ceph-mon[74456]: 3.d scrub ok
Jan 26 09:42:32 compute-0 ceph-mon[74456]: 4.8 scrub starts
Jan 26 09:42:32 compute-0 ceph-mon[74456]: 4.8 scrub ok
Jan 26 09:42:32 compute-0 ceph-mon[74456]: 3.17 deep-scrub starts
Jan 26 09:42:32 compute-0 ceph-mon[74456]: 3.17 deep-scrub ok
Jan 26 09:42:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'mirroring'
Jan 26 09:42:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'nfs'
Jan 26 09:42:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:33.513+0000 7f6652c85140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 09:42:33 compute-0 ceph-mgr[74755]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 09:42:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'orchestrator'
Jan 26 09:42:33 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.c deep-scrub starts
Jan 26 09:42:33 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.c deep-scrub ok
Jan 26 09:42:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:33.732+0000 7f6652c85140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 09:42:33 compute-0 ceph-mgr[74755]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 09:42:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'osd_perf_query'
Jan 26 09:42:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:33.827+0000 7f6652c85140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 09:42:33 compute-0 ceph-mgr[74755]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 09:42:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'osd_support'
Jan 26 09:42:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:33.896+0000 7f6652c85140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 09:42:33 compute-0 ceph-mgr[74755]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 09:42:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'pg_autoscaler'
Jan 26 09:42:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:33.975+0000 7f6652c85140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 09:42:33 compute-0 ceph-mgr[74755]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 09:42:33 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'progress'
Jan 26 09:42:33 compute-0 ceph-mon[74456]: 4.c deep-scrub starts
Jan 26 09:42:33 compute-0 ceph-mon[74456]: 4.c deep-scrub ok
Jan 26 09:42:33 compute-0 ceph-mon[74456]: 7.14 deep-scrub starts
Jan 26 09:42:33 compute-0 ceph-mon[74456]: 7.14 deep-scrub ok
Jan 26 09:42:33 compute-0 ceph-mon[74456]: 7.13 scrub starts
Jan 26 09:42:33 compute-0 ceph-mon[74456]: 7.13 scrub ok
Jan 26 09:42:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:34.049+0000 7f6652c85140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 09:42:34 compute-0 ceph-mgr[74755]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 09:42:34 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'prometheus'
Jan 26 09:42:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:34.408+0000 7f6652c85140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 09:42:34 compute-0 ceph-mgr[74755]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 09:42:34 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rbd_support'
Jan 26 09:42:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:34.500+0000 7f6652c85140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 09:42:34 compute-0 ceph-mgr[74755]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 09:42:34 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'restful'
Jan 26 09:42:34 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.e deep-scrub starts
Jan 26 09:42:34 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.e deep-scrub ok
Jan 26 09:42:34 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rgw'
Jan 26 09:42:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:34.909+0000 7f6652c85140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 09:42:34 compute-0 ceph-mgr[74755]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 09:42:34 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rook'
Jan 26 09:42:35 compute-0 ceph-mon[74456]: 6.19 scrub starts
Jan 26 09:42:35 compute-0 ceph-mon[74456]: 6.19 scrub ok
Jan 26 09:42:35 compute-0 ceph-mon[74456]: 7.a scrub starts
Jan 26 09:42:35 compute-0 ceph-mon[74456]: 7.a scrub ok
Jan 26 09:42:35 compute-0 ceph-mon[74456]: 5.c deep-scrub starts
Jan 26 09:42:35 compute-0 ceph-mon[74456]: 5.c deep-scrub ok
Jan 26 09:42:35 compute-0 ceph-mon[74456]: 3.f deep-scrub starts
Jan 26 09:42:35 compute-0 ceph-mon[74456]: 3.f deep-scrub ok
Jan 26 09:42:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:35.450+0000 7f6652c85140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 09:42:35 compute-0 ceph-mgr[74755]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 09:42:35 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'selftest'
Jan 26 09:42:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:35.523+0000 7f6652c85140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 09:42:35 compute-0 ceph-mgr[74755]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 09:42:35 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'snap_schedule'
Jan 26 09:42:35 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 26 09:42:35 compute-0 systemd[75791]: Activating special unit Exit the Session...
Jan 26 09:42:35 compute-0 systemd[75791]: Stopped target Main User Target.
Jan 26 09:42:35 compute-0 systemd[75791]: Stopped target Basic System.
Jan 26 09:42:35 compute-0 systemd[75791]: Stopped target Paths.
Jan 26 09:42:35 compute-0 systemd[75791]: Stopped target Sockets.
Jan 26 09:42:35 compute-0 systemd[75791]: Stopped target Timers.
Jan 26 09:42:35 compute-0 systemd[75791]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 26 09:42:35 compute-0 systemd[75791]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 26 09:42:35 compute-0 systemd[75791]: Closed D-Bus User Message Bus Socket.
Jan 26 09:42:35 compute-0 systemd[75791]: Stopped Create User's Volatile Files and Directories.
Jan 26 09:42:35 compute-0 systemd[75791]: Removed slice User Application Slice.
Jan 26 09:42:35 compute-0 systemd[75791]: Reached target Shutdown.
Jan 26 09:42:35 compute-0 systemd[75791]: Finished Exit the Session.
Jan 26 09:42:35 compute-0 systemd[75791]: Reached target Exit the Session.
Jan 26 09:42:35 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 26 09:42:35 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 26 09:42:35 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 26 09:42:35 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 26 09:42:35 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 26 09:42:35 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 26 09:42:35 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 26 09:42:35 compute-0 systemd[1]: user-42477.slice: Consumed 36.733s CPU time.
Jan 26 09:42:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:35.615+0000 7f6652c85140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 09:42:35 compute-0 ceph-mgr[74755]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 09:42:35 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'stats'
Jan 26 09:42:35 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 26 09:42:35 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 26 09:42:35 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'status'
Jan 26 09:42:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:35.785+0000 7f6652c85140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 09:42:35 compute-0 ceph-mgr[74755]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 09:42:35 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'telegraf'
Jan 26 09:42:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:35.868+0000 7f6652c85140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 09:42:35 compute-0 ceph-mgr[74755]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 09:42:35 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'telemetry'
Jan 26 09:42:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:36.038+0000 7f6652c85140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'test_orchestrator'
Jan 26 09:42:36 compute-0 ceph-mon[74456]: 2.d scrub starts
Jan 26 09:42:36 compute-0 ceph-mon[74456]: 2.d scrub ok
Jan 26 09:42:36 compute-0 ceph-mon[74456]: 7.e deep-scrub starts
Jan 26 09:42:36 compute-0 ceph-mon[74456]: 7.e deep-scrub ok
Jan 26 09:42:36 compute-0 ceph-mon[74456]: 4.d scrub starts
Jan 26 09:42:36 compute-0 ceph-mon[74456]: 4.d scrub ok
Jan 26 09:42:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:36.274+0000 7f6652c85140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'volumes'
Jan 26 09:42:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:36.550+0000 7f6652c85140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'zabbix'
Jan 26 09:42:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:42:36.655+0000 7f6652c85140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Active manager daemon compute-0.zllcia restarted
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zllcia
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: ms_deliver_dispatch: unhandled message 0x55f6f0157860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr handle_mgr_map Activating!
Jan 26 09:42:36 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.zllcia(active, starting, since 0.0311037s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr handle_mgr_map I am now activating
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:42:36 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"}]: dispatch
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.xammti", "id": "compute-1.xammti"} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-1.xammti", "id": "compute-1.xammti"}]: dispatch
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.oynaeu", "id": "compute-2.oynaeu"} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oynaeu", "id": "compute-2.oynaeu"}]: dispatch
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e1 all = 1
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: balancer
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [balancer INFO root] Starting
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Manager daemon compute-0.zllcia is now available
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:42:36
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: cephadm
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: crash
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: dashboard
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: devicehealth
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [dashboard INFO sso] Loading SSO DB version=1
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [devicehealth INFO root] Starting
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: iostat
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: nfs
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: orchestrator
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: pg_autoscaler
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: progress
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [progress INFO root] Loading...
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f65d0f91520>, <progress.module.GhostEvent object at 0x7f65d0f91760>, <progress.module.GhostEvent object at 0x7f65d0f91790>, <progress.module.GhostEvent object at 0x7f65d0f917c0>, <progress.module.GhostEvent object at 0x7f65d0f917f0>, <progress.module.GhostEvent object at 0x7f65d0f91820>, <progress.module.GhostEvent object at 0x7f65d0f91850>, <progress.module.GhostEvent object at 0x7f65d0f91880>, <progress.module.GhostEvent object at 0x7f65d0f918b0>, <progress.module.GhostEvent object at 0x7f65d0f918e0>, <progress.module.GhostEvent object at 0x7f65d0f91910>, <progress.module.GhostEvent object at 0x7f65d0f91940>] historic events
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [progress INFO root] Loaded OSDMap, ready.
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] recovery thread starting
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] starting setup
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: rbd_support
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: restful
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"}]: dispatch
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [restful INFO root] server_addr: :: server_port: 8003
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: status
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: telemetry
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [restful WARNING root] server not running: no certificate configured
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] PerfHandler: starting
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: volumes
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TaskHandler: starting
Jan 26 09:42:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"} v 0)
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"}]: dispatch
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 26 09:42:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] setup complete
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oynaeu restarted
Jan 26 09:42:36 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oynaeu started
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 26 09:42:37 compute-0 ceph-mon[74456]: 4.15 scrub starts
Jan 26 09:42:37 compute-0 ceph-mon[74456]: 4.15 scrub ok
Jan 26 09:42:37 compute-0 ceph-mon[74456]: 7.f scrub starts
Jan 26 09:42:37 compute-0 ceph-mon[74456]: 7.f scrub ok
Jan 26 09:42:37 compute-0 ceph-mon[74456]: 5.9 deep-scrub starts
Jan 26 09:42:37 compute-0 ceph-mon[74456]: 5.9 deep-scrub ok
Jan 26 09:42:37 compute-0 ceph-mon[74456]: Active manager daemon compute-0.zllcia restarted
Jan 26 09:42:37 compute-0 ceph-mon[74456]: Activating manager daemon compute-0.zllcia
Jan 26 09:42:37 compute-0 ceph-mon[74456]: osdmap e56: 3 total, 3 up, 3 in
Jan 26 09:42:37 compute-0 ceph-mon[74456]: mgrmap e22: compute-0.zllcia(active, starting, since 0.0311037s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-1.xammti", "id": "compute-1.xammti"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oynaeu", "id": "compute-2.oynaeu"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: Manager daemon compute-0.zllcia is now available
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: Standby manager daemon compute-2.oynaeu restarted
Jan 26 09:42:37 compute-0 ceph-mon[74456]: Standby manager daemon compute-2.oynaeu started
Jan 26 09:42:37 compute-0 sshd-session[93038]: Accepted publickey for ceph-admin from 192.168.122.100 port 54002 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:42:37 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.xammti restarted
Jan 26 09:42:37 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.xammti started
Jan 26 09:42:37 compute-0 systemd-logind[787]: New session 35 of user ceph-admin.
Jan 26 09:42:37 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 26 09:42:37 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 26 09:42:37 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 26 09:42:37 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 26 09:42:37 compute-0 systemd[93053]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.module] Engine started.
Jan 26 09:42:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:42:37 compute-0 systemd[93053]: Queued start job for default target Main User Target.
Jan 26 09:42:37 compute-0 systemd[93053]: Created slice User Application Slice.
Jan 26 09:42:37 compute-0 systemd[93053]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 26 09:42:37 compute-0 systemd[93053]: Started Daily Cleanup of User's Temporary Directories.
Jan 26 09:42:37 compute-0 systemd[93053]: Reached target Paths.
Jan 26 09:42:37 compute-0 systemd[93053]: Reached target Timers.
Jan 26 09:42:37 compute-0 systemd[93053]: Starting D-Bus User Message Bus Socket...
Jan 26 09:42:37 compute-0 systemd[93053]: Starting Create User's Volatile Files and Directories...
Jan 26 09:42:37 compute-0 systemd[93053]: Listening on D-Bus User Message Bus Socket.
Jan 26 09:42:37 compute-0 systemd[93053]: Reached target Sockets.
Jan 26 09:42:37 compute-0 systemd[93053]: Finished Create User's Volatile Files and Directories.
Jan 26 09:42:37 compute-0 systemd[93053]: Reached target Basic System.
Jan 26 09:42:37 compute-0 systemd[93053]: Reached target Main User Target.
Jan 26 09:42:37 compute-0 systemd[93053]: Startup finished in 116ms.
Jan 26 09:42:37 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 26 09:42:37 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Jan 26 09:42:37 compute-0 sshd-session[93038]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:42:37 compute-0 sudo[93069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:37 compute-0 sudo[93069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:37 compute-0 sudo[93069]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:37 compute-0 sudo[93094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 26 09:42:37 compute-0 sudo[93094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:37 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 26 09:42:37 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.zllcia(active, since 1.05181s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14448 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 26 09:42:37 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 26 09:42:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 26 09:42:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 26 09:42:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 26 09:42:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 26 09:42:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 26 09:42:37 compute-0 ceph-mon[74456]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 26 09:42:37 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 26 09:42:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0[74452]: 2026-01-26T09:42:37.722+0000 7f3501f83640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 26 09:42:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e2 new map
Jan 26 09:42:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-01-26T09:42:37.723366+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-26T09:42:37.723319+0000
                                           modified        2026-01-26T09:42:37.723319+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Jan 26 09:42:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 26 09:42:37 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 26 09:42:37 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 26 09:42:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 26 09:42:37 compute-0 systemd[1]: libpod-47876ef175b26e6d15b44bf2444bcf2a7df301b9ac45b869080413d5973c4968.scope: Deactivated successfully.
Jan 26 09:42:37 compute-0 podman[92851]: 2026-01-26 09:42:37.778355672 +0000 UTC m=+10.994386526 container died 47876ef175b26e6d15b44bf2444bcf2a7df301b9ac45b869080413d5973c4968 (image=quay.io/ceph/ceph:v19, name=friendly_shirley, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 09:42:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ccaa957b5ceb3ca6c69f9b8974b6510ff6d1a286b990c4a77470ab41afb434a-merged.mount: Deactivated successfully.
Jan 26 09:42:37 compute-0 podman[92851]: 2026-01-26 09:42:37.818629101 +0000 UTC m=+11.034659935 container remove 47876ef175b26e6d15b44bf2444bcf2a7df301b9ac45b869080413d5973c4968 (image=quay.io/ceph/ceph:v19, name=friendly_shirley, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:37 compute-0 systemd[1]: libpod-conmon-47876ef175b26e6d15b44bf2444bcf2a7df301b9ac45b869080413d5973c4968.scope: Deactivated successfully.
Jan 26 09:42:37 compute-0 sudo[92848]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:42:37] ENGINE Bus STARTING
Jan 26 09:42:37 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:42:37] ENGINE Bus STARTING
Jan 26 09:42:38 compute-0 sudo[93236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdxoifyrlaibjfchywrjhzlaxyspgezm ; /usr/bin/python3'
Jan 26 09:42:38 compute-0 sudo[93236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:38 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:42:38] ENGINE Serving on https://192.168.122.100:7150
Jan 26 09:42:38 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:42:38] ENGINE Serving on https://192.168.122.100:7150
Jan 26 09:42:38 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:42:38] ENGINE Client ('192.168.122.100', 37614) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 09:42:38 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:42:38] ENGINE Client ('192.168.122.100', 37614) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 09:42:38 compute-0 podman[93239]: 2026-01-26 09:42:38.095154719 +0000 UTC m=+0.060580512 container exec 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:38 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:42:38] ENGINE Serving on http://192.168.122.100:8765
Jan 26 09:42:38 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:42:38] ENGINE Serving on http://192.168.122.100:8765
Jan 26 09:42:38 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:42:38] ENGINE Bus STARTED
Jan 26 09:42:38 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:42:38] ENGINE Bus STARTED
Jan 26 09:42:38 compute-0 python3[93247]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:38 compute-0 podman[93239]: 2026-01-26 09:42:38.189615975 +0000 UTC m=+0.155041678 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 09:42:38 compute-0 ceph-mon[74456]: 2.f scrub starts
Jan 26 09:42:38 compute-0 ceph-mon[74456]: 2.f scrub ok
Jan 26 09:42:38 compute-0 ceph-mon[74456]: 3.7 deep-scrub starts
Jan 26 09:42:38 compute-0 ceph-mon[74456]: 3.7 deep-scrub ok
Jan 26 09:42:38 compute-0 ceph-mon[74456]: 3.10 deep-scrub starts
Jan 26 09:42:38 compute-0 ceph-mon[74456]: 3.10 deep-scrub ok
Jan 26 09:42:38 compute-0 ceph-mon[74456]: Standby manager daemon compute-1.xammti restarted
Jan 26 09:42:38 compute-0 ceph-mon[74456]: Standby manager daemon compute-1.xammti started
Jan 26 09:42:38 compute-0 ceph-mon[74456]: mgrmap e23: compute-0.zllcia(active, since 1.05181s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:38 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 26 09:42:38 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 26 09:42:38 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 26 09:42:38 compute-0 ceph-mon[74456]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 26 09:42:38 compute-0 ceph-mon[74456]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 26 09:42:38 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 26 09:42:38 compute-0 ceph-mon[74456]: osdmap e57: 3 total, 3 up, 3 in
Jan 26 09:42:38 compute-0 ceph-mon[74456]: fsmap cephfs:0
Jan 26 09:42:38 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:38 compute-0 podman[93272]: 2026-01-26 09:42:38.256495469 +0000 UTC m=+0.051625599 container create fdf31c0e8aef834d13eed283844c6a95313abe32596f7da07e3a140e7f4967cd (image=quay.io/ceph/ceph:v19, name=affectionate_meninsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Jan 26 09:42:38 compute-0 systemd[1]: Started libpod-conmon-fdf31c0e8aef834d13eed283844c6a95313abe32596f7da07e3a140e7f4967cd.scope.
Jan 26 09:42:38 compute-0 podman[93272]: 2026-01-26 09:42:38.233638615 +0000 UTC m=+0.028768765 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36561914b083765075807489525d1d86936c969cdaa4bd6b8f732bcb6b82ddb0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36561914b083765075807489525d1d86936c969cdaa4bd6b8f732bcb6b82ddb0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36561914b083765075807489525d1d86936c969cdaa4bd6b8f732bcb6b82ddb0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:38 compute-0 podman[93272]: 2026-01-26 09:42:38.36365904 +0000 UTC m=+0.158789190 container init fdf31c0e8aef834d13eed283844c6a95313abe32596f7da07e3a140e7f4967cd (image=quay.io/ceph/ceph:v19, name=affectionate_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Jan 26 09:42:38 compute-0 podman[93272]: 2026-01-26 09:42:38.375644487 +0000 UTC m=+0.170774617 container start fdf31c0e8aef834d13eed283844c6a95313abe32596f7da07e3a140e7f4967cd (image=quay.io/ceph/ceph:v19, name=affectionate_meninsky, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:38 compute-0 podman[93272]: 2026-01-26 09:42:38.378896586 +0000 UTC m=+0.174026706 container attach fdf31c0e8aef834d13eed283844c6a95313abe32596f7da07e3a140e7f4967cd (image=quay.io/ceph/ceph:v19, name=affectionate_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:42:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:42:38 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:42:38 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:38 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 26 09:42:38 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 26 09:42:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:38 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:38 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:38 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 26 09:42:38 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:38 compute-0 affectionate_meninsky[93319]: Scheduled mds.cephfs update...
Jan 26 09:42:38 compute-0 podman[93409]: 2026-01-26 09:42:38.743148187 +0000 UTC m=+0.069261389 container exec 57a35f5609c036543a7218c3413c7cd92ec725c73b5cc2d0a3c41170bf8442ad (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:42:38 compute-0 systemd[1]: libpod-fdf31c0e8aef834d13eed283844c6a95313abe32596f7da07e3a140e7f4967cd.scope: Deactivated successfully.
Jan 26 09:42:38 compute-0 podman[93272]: 2026-01-26 09:42:38.761997521 +0000 UTC m=+0.557127661 container died fdf31c0e8aef834d13eed283844c6a95313abe32596f7da07e3a140e7f4967cd (image=quay.io/ceph/ceph:v19, name=affectionate_meninsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 09:42:38 compute-0 podman[93409]: 2026-01-26 09:42:38.779266372 +0000 UTC m=+0.105379544 container exec_died 57a35f5609c036543a7218c3413c7cd92ec725c73b5cc2d0a3c41170bf8442ad (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:42:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-36561914b083765075807489525d1d86936c969cdaa4bd6b8f732bcb6b82ddb0-merged.mount: Deactivated successfully.
Jan 26 09:42:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:42:38 compute-0 podman[93272]: 2026-01-26 09:42:38.841514899 +0000 UTC m=+0.636645049 container remove fdf31c0e8aef834d13eed283844c6a95313abe32596f7da07e3a140e7f4967cd (image=quay.io/ceph/ceph:v19, name=affectionate_meninsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 26 09:42:38 compute-0 systemd[1]: libpod-conmon-fdf31c0e8aef834d13eed283844c6a95313abe32596f7da07e3a140e7f4967cd.scope: Deactivated successfully.
Jan 26 09:42:38 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:42:38 compute-0 ceph-mgr[74755]: [devicehealth INFO root] Check health
Jan 26 09:42:38 compute-0 sudo[93094]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:38 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:42:38 compute-0 sudo[93236]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:38 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:42:38 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:38 compute-0 sudo[93471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:38 compute-0 sudo[93471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:38 compute-0 sudo[93471]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:39 compute-0 sudo[93496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:42:39 compute-0 sudo[93542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epkutyjvnqkyayonagvxkvkhqzpwpxep ; /usr/bin/python3'
Jan 26 09:42:39 compute-0 sudo[93496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:39 compute-0 sudo[93542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:39 compute-0 python3[93546]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
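The podman invocation above is the Ansible-driven `ceph nfs cluster create` call: it asks the mgr to create the `.nfs` pool, save an `nfs.cephfs` service spec, and save an `ingress.nfs.cephfs` spec with the HAProxy PROXY-protocol ingress mode and virtual IP 192.168.122.2/24, all of which the mon audit lines below record. A minimal sketch of issuing the same call from Python, assuming the host already has /etc/ceph populated as the earlier cephadm steps arrange (the subprocess wrapper is illustrative, not cephadm's own code path):

    # Sketch: re-issue the NFS-cluster-create command seen in the log via podman.
    # Image, fsid, paths, and flags are copied from the log line itself.
    import subprocess

    FSID = "1a70b85d-e3fd-5814-8a6a-37ea00fcae30"
    subprocess.run([
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", FSID, "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "nfs", "cluster", "create", "cephfs",
        "--ingress", "--virtual-ip=192.168.122.2/24",
        "--ingress-mode=haproxy-protocol",
        "--placement=compute-0 compute-1 compute-2",
    ], check=True)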
Jan 26 09:42:39 compute-0 ceph-mon[74456]: 7.5 scrub starts
Jan 26 09:42:39 compute-0 ceph-mon[74456]: 7.5 scrub ok
Jan 26 09:42:39 compute-0 ceph-mon[74456]: 7.3 scrub starts
Jan 26 09:42:39 compute-0 ceph-mon[74456]: 7.3 scrub ok
Jan 26 09:42:39 compute-0 ceph-mon[74456]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:39 compute-0 ceph-mon[74456]: [26/Jan/2026:09:42:37] ENGINE Bus STARTING
Jan 26 09:42:39 compute-0 ceph-mon[74456]: [26/Jan/2026:09:42:38] ENGINE Serving on https://192.168.122.100:7150
Jan 26 09:42:39 compute-0 ceph-mon[74456]: [26/Jan/2026:09:42:38] ENGINE Client ('192.168.122.100', 37614) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 09:42:39 compute-0 ceph-mon[74456]: 3.c scrub starts
Jan 26 09:42:39 compute-0 ceph-mon[74456]: 3.c scrub ok
Jan 26 09:42:39 compute-0 ceph-mon[74456]: [26/Jan/2026:09:42:38] ENGINE Serving on http://192.168.122.100:8765
Jan 26 09:42:39 compute-0 ceph-mon[74456]: [26/Jan/2026:09:42:38] ENGINE Bus STARTED
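The ENGINE lines are the mgr's embedded CherryPy servers coming up after the recent mgr restart (an HTTPS listener on :7150 and what appears to be the cephadm HTTP endpoint on :8765), relayed through the mon's cluster log. The "Client ... lost" entry is what CherryPy reports when a peer opens the TLS port and drops the connection before the handshake completes, as port probes and health checks commonly do. One way such a server-side message can arise, assuming the address from the log is reachable:

    # Sketch: connect to the TLS port and close immediately, without handshaking.
    # The server side then logs a handshake EOF like the ENGINE line above.
    import socket

    sock = socket.create_connection(("192.168.122.100", 7150), timeout=5)
    sock.close()  # no TLS ClientHello/response exchange ever completes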
Jan 26 09:42:39 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:39 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:39 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:39 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:39 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:39 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:39 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:39 compute-0 podman[93547]: 2026-01-26 09:42:39.260526073 +0000 UTC m=+0.043650251 container create 121dabcbf10b185378894b270874175cdb8b350f23478a448d2359ae6df9093c (image=quay.io/ceph/ceph:v19, name=wizardly_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 09:42:39 compute-0 systemd[1]: Started libpod-conmon-121dabcbf10b185378894b270874175cdb8b350f23478a448d2359ae6df9093c.scope.
Jan 26 09:42:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0eeb38a8e097a62f96f40e8c113462203186fa91895a1b4b1e18bf748a36aa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0eeb38a8e097a62f96f40e8c113462203186fa91895a1b4b1e18bf748a36aa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0eeb38a8e097a62f96f40e8c113462203186fa91895a1b4b1e18bf748a36aa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:39 compute-0 podman[93547]: 2026-01-26 09:42:39.23951069 +0000 UTC m=+0.022634878 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:39 compute-0 podman[93547]: 2026-01-26 09:42:39.354573397 +0000 UTC m=+0.137697585 container init 121dabcbf10b185378894b270874175cdb8b350f23478a448d2359ae6df9093c (image=quay.io/ceph/ceph:v19, name=wizardly_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 26 09:42:39 compute-0 podman[93547]: 2026-01-26 09:42:39.362331339 +0000 UTC m=+0.145455527 container start 121dabcbf10b185378894b270874175cdb8b350f23478a448d2359ae6df9093c (image=quay.io/ceph/ceph:v19, name=wizardly_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:39 compute-0 podman[93547]: 2026-01-26 09:42:39.366453371 +0000 UTC m=+0.149577579 container attach 121dabcbf10b185378894b270874175cdb8b350f23478a448d2359ae6df9093c (image=quay.io/ceph/ceph:v19, name=wizardly_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:42:39 compute-0 sudo[93496]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:42:39 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.zllcia(active, since 2s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:39 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 26 09:42:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:42:39 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 26 09:42:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 26 09:42:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 09:42:39 compute-0 sudo[93617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:39 compute-0 sudo[93617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:39 compute-0 sudo[93617]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:39 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14490 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Jan 26 09:42:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Jan 26 09:42:39 compute-0 sudo[93642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 26 09:42:39 compute-0 sudo[93642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:42:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:42:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 26 09:42:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 09:42:40 compute-0 sudo[93642]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:42:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:42:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 09:42:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 09:42:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:42:40 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:42:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
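This pair of mon_commands is how cephadm assembles the client files it is about to push: `config generate-minimal-conf` returns a stripped ceph.conf (essentially the fsid and mon addresses) and `auth get client.admin` returns the admin keyring; the "Updating <host>:/etc/ceph/ceph.conf" and keyring lines that follow distribute exactly these. A hedged sketch of fetching the same two artifacts with the ceph CLI (the destination paths are illustrative):

    # Sketch: fetch the minimal conf and admin keyring, as the audited commands do.
    import subprocess

    def ceph(*args: str) -> str:
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    minimal_conf = ceph("config", "generate-minimal-conf")
    admin_keyring = ceph("auth", "get", "client.admin")

    with open("/tmp/ceph.conf.minimal", "w") as f:
        f.write(minimal_conf)
    with open("/tmp/client.admin.keyring", "w") as f:
        f.write(admin_keyring)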
Jan 26 09:42:40 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:42:40 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:42:40 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:42:40 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:42:40 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:42:40 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:42:40 compute-0 sudo[93689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 26 09:42:40 compute-0 sudo[93689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:40 compute-0 sudo[93689]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:40 compute-0 sudo[93714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph
Jan 26 09:42:40 compute-0 sudo[93714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:40 compute-0 sudo[93714]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:40 compute-0 sudo[93739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:42:40 compute-0 sudo[93739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:40 compute-0 sudo[93739]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:40 compute-0 ceph-mon[74456]: 4.1 scrub starts
Jan 26 09:42:40 compute-0 ceph-mon[74456]: 4.1 scrub ok
Jan 26 09:42:40 compute-0 ceph-mon[74456]: 5.3 scrub starts
Jan 26 09:42:40 compute-0 ceph-mon[74456]: 5.3 scrub ok
Jan 26 09:42:40 compute-0 ceph-mon[74456]: pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:40 compute-0 ceph-mon[74456]: from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:40 compute-0 ceph-mon[74456]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:40 compute-0 ceph-mon[74456]: 5.15 scrub starts
Jan 26 09:42:40 compute-0 ceph-mon[74456]: 5.15 scrub ok
Jan 26 09:42:40 compute-0 ceph-mon[74456]: mgrmap e24: compute-0.zllcia(active, since 2s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:40 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:40 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:40 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 09:42:40 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Jan 26 09:42:40 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:40 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:40 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 09:42:40 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:40 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:40 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 09:42:40 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:40 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:42:40 compute-0 sudo[93764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:42:40 compute-0 sudo[93764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:40 compute-0 sudo[93764]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:40 compute-0 sudo[93789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:42:40 compute-0 sudo[93789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:40 compute-0 sudo[93789]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:40 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:40 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:40 compute-0 sudo[93837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:42:40 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:40 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:40 compute-0 sudo[93837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:40 compute-0 sudo[93837]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:40 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.6 deep-scrub starts
Jan 26 09:42:40 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.6 deep-scrub ok
Jan 26 09:42:40 compute-0 sudo[93862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:42:40 compute-0 sudo[93862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:40 compute-0 sudo[93862]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 26 09:42:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Jan 26 09:42:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 26 09:42:40 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 26 09:42:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Jan 26 09:42:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Jan 26 09:42:40 compute-0 sudo[93887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 26 09:42:40 compute-0 sudo[93887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:40 compute-0 sudo[93887]: pam_unix(sudo:session): session closed for user root
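The sudo trail above (mkdir the staging directory, touch ceph.conf.new, chown -R ceph-admin so the file can be written, then chown 0:0 and chmod 644 once written, and finally mv into /etc/ceph) is cephadm's staged file push: the file is fully materialized with its final ownership and mode under /tmp/cephadm-<fsid>/ before a single rename swaps it into place, so readers never see a half-written config. The same write-then-rename pattern in a short Python sketch, assuming the process runs as root and that staging and destination sit on one filesystem (os.replace is only atomic then; the mv in the log also handles the cross-device case):

    # Sketch of the stage-then-rename pattern visible in the sudo entries.
    # Paths mirror the log; the file content here is a placeholder.
    import os

    fsid = "1a70b85d-e3fd-5814-8a6a-37ea00fcae30"
    staging = f"/tmp/cephadm-{fsid}/etc/ceph"
    os.makedirs(staging, exist_ok=True)
    os.makedirs("/etc/ceph", exist_ok=True)

    new_path = os.path.join(staging, "ceph.conf.new")
    with open(new_path, "w") as f:
        f.write("# minimal ceph.conf content goes here\n")
    os.chown(new_path, 0, 0)   # root:root, matching the chown -R 0:0 step
    os.chmod(new_path, 0o644)  # keyrings get 0600 instead, per the chmod 600 steps below
    os.replace(new_path, "/etc/ceph/ceph.conf")  # atomic within one filesystem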
Jan 26 09:42:40 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:40 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:40 compute-0 sudo[93912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:42:40 compute-0 sudo[93912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:40 compute-0 sudo[93912]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:40 compute-0 sudo[93937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:42:40 compute-0 sudo[93937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:40 compute-0 sudo[93937]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:40 compute-0 sudo[93962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:42:40 compute-0 sudo[93962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:40 compute-0 sudo[93962]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:40 compute-0 sudo[93987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:42:40 compute-0 sudo[93987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:40 compute-0 sudo[93987]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:41 compute-0 sudo[94012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:42:41 compute-0 sudo[94012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 sudo[94012]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 sudo[94060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:42:41 compute-0 sudo[94060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 sudo[94060]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 sudo[94085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:42:41 compute-0 sudo[94085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:41 compute-0 sudo[94085]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 sudo[94110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:41 compute-0 sudo[94110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 sudo[94110]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:41 compute-0 sudo[94135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 26 09:42:41 compute-0 sudo[94135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 sudo[94135]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 sudo[94160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph
Jan 26 09:42:41 compute-0 sudo[94160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 sudo[94160]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 sudo[94185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:42:41 compute-0 sudo[94185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.zllcia(active, since 4s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:41 compute-0 sudo[94185]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 ceph-mon[74456]: 2.c scrub starts
Jan 26 09:42:41 compute-0 ceph-mon[74456]: 2.c scrub ok
Jan 26 09:42:41 compute-0 ceph-mon[74456]: 2.6 scrub starts
Jan 26 09:42:41 compute-0 ceph-mon[74456]: 2.6 scrub ok
Jan 26 09:42:41 compute-0 ceph-mon[74456]: from='client.14490 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 09:42:41 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 09:42:41 compute-0 ceph-mon[74456]: Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:42:41 compute-0 ceph-mon[74456]: Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:42:41 compute-0 ceph-mon[74456]: Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:42:41 compute-0 ceph-mon[74456]: 3.14 scrub starts
Jan 26 09:42:41 compute-0 ceph-mon[74456]: 3.14 scrub ok
Jan 26 09:42:41 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Jan 26 09:42:41 compute-0 ceph-mon[74456]: osdmap e58: 3 total, 3 up, 3 in
Jan 26 09:42:41 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Jan 26 09:42:41 compute-0 sudo[94210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:42:41 compute-0 sudo[94210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 sudo[94210]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 sudo[94236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:42:41 compute-0 sudo[94236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 sudo[94236]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:41 compute-0 sudo[94284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:42:41 compute-0 sudo[94284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 sudo[94284]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 26 09:42:41 compute-0 sudo[94309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:42:41 compute-0 sudo[94309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 26 09:42:41 compute-0 sudo[94309]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 26 09:42:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Jan 26 09:42:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 26 09:42:41 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 26 09:42:41 compute-0 sudo[94334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:41 compute-0 sudo[94334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 sudo[94334]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 26 09:42:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:41 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:41 compute-0 systemd[1]: libpod-121dabcbf10b185378894b270874175cdb8b350f23478a448d2359ae6df9093c.scope: Deactivated successfully.
Jan 26 09:42:41 compute-0 podman[93547]: 2026-01-26 09:42:41.816276594 +0000 UTC m=+2.599400792 container died 121dabcbf10b185378894b270874175cdb8b350f23478a448d2359ae6df9093c (image=quay.io/ceph/ceph:v19, name=wizardly_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 09:42:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b0eeb38a8e097a62f96f40e8c113462203186fa91895a1b4b1e18bf748a36aa-merged.mount: Deactivated successfully.
Jan 26 09:42:41 compute-0 podman[93547]: 2026-01-26 09:42:41.86933393 +0000 UTC m=+2.652458088 container remove 121dabcbf10b185378894b270874175cdb8b350f23478a448d2359ae6df9093c (image=quay.io/ceph/ceph:v19, name=wizardly_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 09:42:41 compute-0 sudo[94369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:42:41 compute-0 sudo[94369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 systemd[1]: libpod-conmon-121dabcbf10b185378894b270874175cdb8b350f23478a448d2359ae6df9093c.scope: Deactivated successfully.
Jan 26 09:42:41 compute-0 sudo[94369]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 sudo[93542]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:41 compute-0 sudo[94406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:42:41 compute-0 sudo[94406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:41 compute-0 sudo[94406]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:42 compute-0 sudo[94431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:42:42 compute-0 sudo[94431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:42 compute-0 sudo[94431]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:42 compute-0 sudo[94456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:42:42 compute-0 sudo[94456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:42 compute-0 sudo[94456]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:42:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:42:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:42 compute-0 sudo[94481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:42:42 compute-0 sudo[94481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:42 compute-0 sudo[94481]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:42 compute-0 sudo[94529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:42:42 compute-0 sudo[94529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:42 compute-0 sudo[94529]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:42:42 compute-0 sudo[94554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:42:42 compute-0 sudo[94554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:42 compute-0 sudo[94554]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:42 compute-0 sudo[94579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:42 compute-0 sudo[94579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:42 compute-0 sudo[94579]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:42:42 compute-0 ceph-mon[74456]: 4.3 scrub starts
Jan 26 09:42:42 compute-0 ceph-mon[74456]: 4.3 scrub ok
Jan 26 09:42:42 compute-0 ceph-mon[74456]: Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:42 compute-0 ceph-mon[74456]: Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:42 compute-0 ceph-mon[74456]: 3.6 deep-scrub starts
Jan 26 09:42:42 compute-0 ceph-mon[74456]: 3.6 deep-scrub ok
Jan 26 09:42:42 compute-0 ceph-mon[74456]: pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:42 compute-0 ceph-mon[74456]: Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:42:42 compute-0 ceph-mon[74456]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:42 compute-0 ceph-mon[74456]: 4.13 scrub starts
Jan 26 09:42:42 compute-0 ceph-mon[74456]: 4.13 scrub ok
Jan 26 09:42:42 compute-0 ceph-mon[74456]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:42 compute-0 ceph-mon[74456]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:42:42 compute-0 ceph-mon[74456]: mgrmap e25: compute-0.zllcia(active, since 4s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:42 compute-0 ceph-mon[74456]: 4.2 scrub starts
Jan 26 09:42:42 compute-0 ceph-mon[74456]: 4.2 scrub ok
Jan 26 09:42:42 compute-0 ceph-mon[74456]: 5.5 scrub starts
Jan 26 09:42:42 compute-0 ceph-mon[74456]: 5.5 scrub ok
Jan 26 09:42:42 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Jan 26 09:42:42 compute-0 ceph-mon[74456]: osdmap e59: 3 total, 3 up, 3 in
Jan 26 09:42:42 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:42 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:42 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:42 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:42:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:42:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:42:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:42:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:42 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 2d5e5ee5-5a33-4663-bbe1-017f1122f0a8 (Updating node-exporter deployment (+2 -> 3))
Jan 26 09:42:42 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Jan 26 09:42:42 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Jan 26 09:42:42 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 26 09:42:42 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 26 09:42:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v9: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 26 09:42:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 26 09:42:42 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 26 09:42:42 compute-0 sudo[94679]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aomttzvdssxtyjfzmfammtdhrssirias ; /usr/bin/python3'
Jan 26 09:42:42 compute-0 sudo[94679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:43 compute-0 python3[94681]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 09:42:43 compute-0 sudo[94679]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:43 compute-0 sudo[94752]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djwpeezaunsgjpxgfdepxskebxrvkwsy ; /usr/bin/python3'
Jan 26 09:42:43 compute-0 sudo[94752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:43 compute-0 python3[94754]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769420562.7626753-37523-95971224760557/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=e8137016e459ec15b04fac1b40fd6c611375a3cb backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:42:43 compute-0 sudo[94752]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:43 compute-0 ceph-mon[74456]: Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:43 compute-0 ceph-mon[74456]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:43 compute-0 ceph-mon[74456]: Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:43 compute-0 ceph-mon[74456]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:43 compute-0 ceph-mon[74456]: Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:42:43 compute-0 ceph-mon[74456]: 5.16 deep-scrub starts
Jan 26 09:42:43 compute-0 ceph-mon[74456]: 5.16 deep-scrub ok
Jan 26 09:42:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:43 compute-0 ceph-mon[74456]: 2.5 scrub starts
Jan 26 09:42:43 compute-0 ceph-mon[74456]: 2.5 scrub ok
Jan 26 09:42:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:43 compute-0 ceph-mon[74456]: 3.2 scrub starts
Jan 26 09:42:43 compute-0 ceph-mon[74456]: 3.2 scrub ok
Jan 26 09:42:43 compute-0 ceph-mon[74456]: osdmap e60: 3 total, 3 up, 3 in
Jan 26 09:42:43 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Jan 26 09:42:43 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Jan 26 09:42:43 compute-0 sudo[94802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iatddahkpntbvqlsepgzawuuwhxwplcp ; /usr/bin/python3'
Jan 26 09:42:43 compute-0 sudo[94802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:43 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.zllcia(active, since 7s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:43 compute-0 python3[94804]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
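This second podman run feeds the freshly copied OpenStack keyring to `ceph auth import`, and the mon audit lines just below record the command being dispatched and finished. A minimal sketch of the import plus a verification read-back (the entity name client.openstack is inferred from the keyring filename and is an assumption):

    # Sketch: import an external keyring and read the entity back.
    import subprocess

    subprocess.run(
        ["ceph", "auth", "import", "-i",
         "/etc/ceph/ceph.client.openstack.keyring"],
        check=True,
    )
    out = subprocess.run(
        ["ceph", "auth", "get", "client.openstack"],  # name assumed from filename
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)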
Jan 26 09:42:44 compute-0 podman[94805]: 2026-01-26 09:42:43.995086708 +0000 UTC m=+0.028579250 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:44 compute-0 podman[94805]: 2026-01-26 09:42:44.104649655 +0000 UTC m=+0.138142167 container create 2453f0654f178366b1f38ec8c72a783b449984bc68fbfde9c934f3ebbe17003f (image=quay.io/ceph/ceph:v19, name=exciting_sutherland, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 09:42:44 compute-0 systemd[1]: Started libpod-conmon-2453f0654f178366b1f38ec8c72a783b449984bc68fbfde9c934f3ebbe17003f.scope.
Jan 26 09:42:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd51be9c87e495996075e1d97daf194dba6459575c4b0003f53645582340ee1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd51be9c87e495996075e1d97daf194dba6459575c4b0003f53645582340ee1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:44 compute-0 podman[94805]: 2026-01-26 09:42:44.218038177 +0000 UTC m=+0.251530769 container init 2453f0654f178366b1f38ec8c72a783b449984bc68fbfde9c934f3ebbe17003f (image=quay.io/ceph/ceph:v19, name=exciting_sutherland, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:44 compute-0 podman[94805]: 2026-01-26 09:42:44.227963638 +0000 UTC m=+0.261456150 container start 2453f0654f178366b1f38ec8c72a783b449984bc68fbfde9c934f3ebbe17003f (image=quay.io/ceph/ceph:v19, name=exciting_sutherland, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 09:42:44 compute-0 podman[94805]: 2026-01-26 09:42:44.241239679 +0000 UTC m=+0.274732221 container attach 2453f0654f178366b1f38ec8c72a783b449984bc68fbfde9c934f3ebbe17003f (image=quay.io/ceph/ceph:v19, name=exciting_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 09:42:44 compute-0 ceph-mon[74456]: Deploying daemon node-exporter.compute-1 on compute-1
Jan 26 09:42:44 compute-0 ceph-mon[74456]: pgmap v9: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:44 compute-0 ceph-mon[74456]: 3.13 deep-scrub starts
Jan 26 09:42:44 compute-0 ceph-mon[74456]: 3.13 deep-scrub ok
Jan 26 09:42:44 compute-0 ceph-mon[74456]: 2.b scrub starts
Jan 26 09:42:44 compute-0 ceph-mon[74456]: 2.b scrub ok
Jan 26 09:42:44 compute-0 ceph-mon[74456]: 5.a deep-scrub starts
Jan 26 09:42:44 compute-0 ceph-mon[74456]: 5.a deep-scrub ok
Jan 26 09:42:44 compute-0 ceph-mon[74456]: mgrmap e26: compute-0.zllcia(active, since 7s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:42:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 26 09:42:44 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/234657791' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 26 09:42:44 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/234657791' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 26 09:42:44 compute-0 systemd[1]: libpod-2453f0654f178366b1f38ec8c72a783b449984bc68fbfde9c934f3ebbe17003f.scope: Deactivated successfully.
Jan 26 09:42:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Jan 26 09:42:44 compute-0 podman[94845]: 2026-01-26 09:42:44.721858253 +0000 UTC m=+0.023082440 container died 2453f0654f178366b1f38ec8c72a783b449984bc68fbfde9c934f3ebbe17003f (image=quay.io/ceph/ceph:v19, name=exciting_sutherland, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 26 09:42:44 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Jan 26 09:42:44 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Jan 26 09:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cd51be9c87e495996075e1d97daf194dba6459575c4b0003f53645582340ee1-merged.mount: Deactivated successfully.
Jan 26 09:42:44 compute-0 podman[94845]: 2026-01-26 09:42:44.755894491 +0000 UTC m=+0.057118648 container remove 2453f0654f178366b1f38ec8c72a783b449984bc68fbfde9c934f3ebbe17003f (image=quay.io/ceph/ceph:v19, name=exciting_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 09:42:44 compute-0 systemd[1]: libpod-conmon-2453f0654f178366b1f38ec8c72a783b449984bc68fbfde9c934f3ebbe17003f.scope: Deactivated successfully.
Jan 26 09:42:44 compute-0 sudo[94802]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:45 compute-0 sudo[94884]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icgeaebhkgtdmggpeayggqrzzfrloebv ; /usr/bin/python3'
Jan 26 09:42:45 compute-0 sudo[94884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:45 compute-0 python3[94886]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:42:45 compute-0 ceph-mon[74456]: 5.11 scrub starts
Jan 26 09:42:45 compute-0 ceph-mon[74456]: 5.11 scrub ok
Jan 26 09:42:45 compute-0 ceph-mon[74456]: 2.a scrub starts
Jan 26 09:42:45 compute-0 ceph-mon[74456]: 2.a scrub ok
Jan 26 09:42:45 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/234657791' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 26 09:42:45 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/234657791' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 26 09:42:45 compute-0 ceph-mon[74456]: 2.e deep-scrub starts
Jan 26 09:42:45 compute-0 ceph-mon[74456]: 2.e deep-scrub ok
Jan 26 09:42:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:42:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 26 09:42:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:45 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Jan 26 09:42:45 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Jan 26 09:42:45 compute-0 podman[94888]: 2026-01-26 09:42:45.667998489 +0000 UTC m=+0.057627182 container create f8a20649d939deed90920bcf066ea5dd1a605937ef9ae6bbaa70e97f7f686c5a (image=quay.io/ceph/ceph:v19, name=exciting_brattain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 26 09:42:45 compute-0 systemd[1]: Started libpod-conmon-f8a20649d939deed90920bcf066ea5dd1a605937ef9ae6bbaa70e97f7f686c5a.scope.
Jan 26 09:42:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885b364583e367ec37414993c7bfe4ed0f1fb9e47093243c68f294ff6dadd4c5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/885b364583e367ec37414993c7bfe4ed0f1fb9e47093243c68f294ff6dadd4c5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:45 compute-0 podman[94888]: 2026-01-26 09:42:45.635846692 +0000 UTC m=+0.025475435 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:45 compute-0 podman[94888]: 2026-01-26 09:42:45.735628652 +0000 UTC m=+0.125257315 container init f8a20649d939deed90920bcf066ea5dd1a605937ef9ae6bbaa70e97f7f686c5a (image=quay.io/ceph/ceph:v19, name=exciting_brattain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 09:42:45 compute-0 podman[94888]: 2026-01-26 09:42:45.741886723 +0000 UTC m=+0.131515376 container start f8a20649d939deed90920bcf066ea5dd1a605937ef9ae6bbaa70e97f7f686c5a (image=quay.io/ceph/ceph:v19, name=exciting_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 09:42:45 compute-0 podman[94888]: 2026-01-26 09:42:45.745627826 +0000 UTC m=+0.135256499 container attach f8a20649d939deed90920bcf066ea5dd1a605937ef9ae6bbaa70e97f7f686c5a (image=quay.io/ceph/ceph:v19, name=exciting_brattain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:45 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 26 09:42:45 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 26 09:42:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 26 09:42:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2890019561' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 09:42:46 compute-0 exciting_brattain[94905]: 
Jan 26 09:42:46 compute-0 exciting_brattain[94905]: {"fsid":"1a70b85d-e3fd-5814-8a6a-37ea00fcae30","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":95,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":60,"num_osds":3,"num_up_osds":3,"osd_up_since":1769420501,"num_in_osds":3,"osd_in_since":1769420478,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":89288704,"bytes_avail":64322637824,"bytes_total":64411926528,"read_bytes_sec":30030,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2026-01-26T09:42:37:723366+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2026-01-26T09:42:19.587844+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.zllcia":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.xammti":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.oynaeu":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","24169":{"start_epoch":5,"start_stamp":"2026-01-26T09:42:18.842562+0000","gid":24169,"addr":"192.168.122.101:0/992292627","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.fbcidm","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 
2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864304","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"88adcf12-6dc3-48b6-86bb-ed23fd934e78","zone_name":"default","zonegroup_id":"423841e2-30ae-45d1-92b7-7a24aa3d4488","zonegroup_name":"default"},"task_status":{}},"24172":{"start_epoch":5,"start_stamp":"2026-01-26T09:42:18.838857+0000","gid":24172,"addr":"192.168.122.102:0/1812478715","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.fgzdbm","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864308","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"88adcf12-6dc3-48b6-86bb-ed23fd934e78","zone_name":"default","zonegroup_id":"423841e2-30ae-45d1-92b7-7a24aa3d4488","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"2d5e5ee5-5a33-4663-bbe1-017f1122f0a8":{"message":"Updating node-exporter deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 26 09:42:46 compute-0 systemd[1]: libpod-f8a20649d939deed90920bcf066ea5dd1a605937ef9ae6bbaa70e97f7f686c5a.scope: Deactivated successfully.
Jan 26 09:42:46 compute-0 podman[94888]: 2026-01-26 09:42:46.182909548 +0000 UTC m=+0.572538211 container died f8a20649d939deed90920bcf066ea5dd1a605937ef9ae6bbaa70e97f7f686c5a (image=quay.io/ceph/ceph:v19, name=exciting_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 26 09:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-885b364583e367ec37414993c7bfe4ed0f1fb9e47093243c68f294ff6dadd4c5-merged.mount: Deactivated successfully.
Jan 26 09:42:46 compute-0 podman[94888]: 2026-01-26 09:42:46.216548715 +0000 UTC m=+0.606177368 container remove f8a20649d939deed90920bcf066ea5dd1a605937ef9ae6bbaa70e97f7f686c5a (image=quay.io/ceph/ceph:v19, name=exciting_brattain, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 26 09:42:46 compute-0 systemd[1]: libpod-conmon-f8a20649d939deed90920bcf066ea5dd1a605937ef9ae6bbaa70e97f7f686c5a.scope: Deactivated successfully.
Jan 26 09:42:46 compute-0 sudo[94884]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:46 compute-0 sudo[94967]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgohrcxltppuzrvhlkfxjulonahaeeia ; /usr/bin/python3'
Jan 26 09:42:46 compute-0 sudo[94967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:46 compute-0 python3[94969]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:46 compute-0 ceph-mon[74456]: pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Jan 26 09:42:46 compute-0 ceph-mon[74456]: 5.10 scrub starts
Jan 26 09:42:46 compute-0 ceph-mon[74456]: 5.10 scrub ok
Jan 26 09:42:46 compute-0 ceph-mon[74456]: 4.19 deep-scrub starts
Jan 26 09:42:46 compute-0 ceph-mon[74456]: 4.19 deep-scrub ok
Jan 26 09:42:46 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:46 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:46 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:46 compute-0 ceph-mon[74456]: Deploying daemon node-exporter.compute-2 on compute-2
Jan 26 09:42:46 compute-0 ceph-mon[74456]: 5.1d scrub starts
Jan 26 09:42:46 compute-0 ceph-mon[74456]: 5.1d scrub ok
Jan 26 09:42:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2890019561' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 09:42:46 compute-0 podman[94970]: 2026-01-26 09:42:46.663055479 +0000 UTC m=+0.082402008 container create e0f66d46ab86d16c802c3197582afcbc08532a0b854b0cfaf9c9a2eaac2201ed (image=quay.io/ceph/ceph:v19, name=quirky_satoshi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 09:42:46 compute-0 sshd-session[94942]: Invalid user admin from 157.245.76.178 port 37638
Jan 26 09:42:46 compute-0 systemd[1]: Started libpod-conmon-e0f66d46ab86d16c802c3197582afcbc08532a0b854b0cfaf9c9a2eaac2201ed.scope.
Jan 26 09:42:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Jan 26 09:42:46 compute-0 podman[94970]: 2026-01-26 09:42:46.636485324 +0000 UTC m=+0.055831943 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d64a3d71bbd7dba3e5eb47331e4f9b52d27ce86c66b309755d5fcda051cf97/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d64a3d71bbd7dba3e5eb47331e4f9b52d27ce86c66b309755d5fcda051cf97/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:46 compute-0 podman[94970]: 2026-01-26 09:42:46.754862261 +0000 UTC m=+0.174208830 container init e0f66d46ab86d16c802c3197582afcbc08532a0b854b0cfaf9c9a2eaac2201ed (image=quay.io/ceph/ceph:v19, name=quirky_satoshi, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:46 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 26 09:42:46 compute-0 sshd-session[94942]: Connection closed by invalid user admin 157.245.76.178 port 37638 [preauth]
Jan 26 09:42:46 compute-0 podman[94970]: 2026-01-26 09:42:46.767172307 +0000 UTC m=+0.186518866 container start e0f66d46ab86d16c802c3197582afcbc08532a0b854b0cfaf9c9a2eaac2201ed (image=quay.io/ceph/ceph:v19, name=quirky_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 26 09:42:46 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 26 09:42:46 compute-0 podman[94970]: 2026-01-26 09:42:46.772893633 +0000 UTC m=+0.192240162 container attach e0f66d46ab86d16c802c3197582afcbc08532a0b854b0cfaf9c9a2eaac2201ed (image=quay.io/ceph/ceph:v19, name=quirky_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 26 09:42:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 09:42:47 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4072035321' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 09:42:47 compute-0 quirky_satoshi[94985]: 
Jan 26 09:42:47 compute-0 quirky_satoshi[94985]: {"epoch":3,"fsid":"1a70b85d-e3fd-5814-8a6a-37ea00fcae30","modified":"2026-01-26T09:41:05.675064Z","created":"2026-01-26T09:38:19.068625Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 26 09:42:47 compute-0 quirky_satoshi[94985]: dumped monmap epoch 3
Jan 26 09:42:47 compute-0 systemd[1]: libpod-e0f66d46ab86d16c802c3197582afcbc08532a0b854b0cfaf9c9a2eaac2201ed.scope: Deactivated successfully.
Jan 26 09:42:47 compute-0 podman[94970]: 2026-01-26 09:42:47.275171347 +0000 UTC m=+0.694517896 container died e0f66d46ab86d16c802c3197582afcbc08532a0b854b0cfaf9c9a2eaac2201ed (image=quay.io/ceph/ceph:v19, name=quirky_satoshi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 26 09:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-78d64a3d71bbd7dba3e5eb47331e4f9b52d27ce86c66b309755d5fcda051cf97-merged.mount: Deactivated successfully.
Jan 26 09:42:47 compute-0 podman[94970]: 2026-01-26 09:42:47.305007701 +0000 UTC m=+0.724354230 container remove e0f66d46ab86d16c802c3197582afcbc08532a0b854b0cfaf9c9a2eaac2201ed (image=quay.io/ceph/ceph:v19, name=quirky_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 09:42:47 compute-0 systemd[1]: libpod-conmon-e0f66d46ab86d16c802c3197582afcbc08532a0b854b0cfaf9c9a2eaac2201ed.scope: Deactivated successfully.
Jan 26 09:42:47 compute-0 sudo[94967]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:42:47 compute-0 ceph-mon[74456]: 5.1f scrub starts
Jan 26 09:42:47 compute-0 ceph-mon[74456]: 5.1f scrub ok
Jan 26 09:42:47 compute-0 ceph-mon[74456]: 4.9 scrub starts
Jan 26 09:42:47 compute-0 ceph-mon[74456]: 4.9 scrub ok
Jan 26 09:42:47 compute-0 ceph-mon[74456]: pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Jan 26 09:42:47 compute-0 ceph-mon[74456]: 5.6 scrub starts
Jan 26 09:42:47 compute-0 ceph-mon[74456]: 5.6 scrub ok
Jan 26 09:42:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4072035321' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 09:42:47 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 26 09:42:47 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 26 09:42:47 compute-0 sudo[95046]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eakayhebghxlbaojkxbukghvrelwjsft ; /usr/bin/python3'
Jan 26 09:42:47 compute-0 sudo[95046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:48 compute-0 python3[95048]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:48 compute-0 podman[95049]: 2026-01-26 09:42:48.049822038 +0000 UTC m=+0.020303154 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:42:48 compute-0 podman[95049]: 2026-01-26 09:42:48.497102582 +0000 UTC m=+0.467583718 container create 1ff666db27a7816691451b4f124328ef0fce21ab3d63ddab134596e2ab18339b (image=quay.io/ceph/ceph:v19, name=gifted_haslett, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 09:42:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:42:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 26 09:42:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:48 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev 2d5e5ee5-5a33-4663-bbe1-017f1122f0a8 (Updating node-exporter deployment (+2 -> 3))
Jan 26 09:42:48 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 2d5e5ee5-5a33-4663-bbe1-017f1122f0a8 (Updating node-exporter deployment (+2 -> 3)) in 6 seconds
Jan 26 09:42:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 26 09:42:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:42:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:42:48 compute-0 systemd[1]: Started libpod-conmon-1ff666db27a7816691451b4f124328ef0fce21ab3d63ddab134596e2ab18339b.scope.
Jan 26 09:42:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:42:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:42:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:42:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5135e0e7004e897e94ca247ffec28a61e79e28eda14cdd038543484d8ef7b6d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5135e0e7004e897e94ca247ffec28a61e79e28eda14cdd038543484d8ef7b6d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:48 compute-0 sudo[95067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:48 compute-0 sudo[95067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:48 compute-0 sudo[95067]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:48 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 26 09:42:48 compute-0 sudo[95092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:42:48 compute-0 sudo[95092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:48 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 26 09:42:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Jan 26 09:42:48 compute-0 podman[95049]: 2026-01-26 09:42:48.756462593 +0000 UTC m=+0.726943789 container init 1ff666db27a7816691451b4f124328ef0fce21ab3d63ddab134596e2ab18339b (image=quay.io/ceph/ceph:v19, name=gifted_haslett, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 09:42:48 compute-0 ceph-mon[74456]: 3.16 deep-scrub starts
Jan 26 09:42:48 compute-0 ceph-mon[74456]: 3.16 deep-scrub ok
Jan 26 09:42:48 compute-0 ceph-mon[74456]: 6.1 scrub starts
Jan 26 09:42:48 compute-0 ceph-mon[74456]: 6.1 scrub ok
Jan 26 09:42:48 compute-0 ceph-mon[74456]: 3.19 scrub starts
Jan 26 09:42:48 compute-0 ceph-mon[74456]: 3.19 scrub ok
Jan 26 09:42:48 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:48 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:48 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:48 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:48 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:42:48 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:42:48 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:48 compute-0 podman[95049]: 2026-01-26 09:42:48.764010009 +0000 UTC m=+0.734491105 container start 1ff666db27a7816691451b4f124328ef0fce21ab3d63ddab134596e2ab18339b (image=quay.io/ceph/ceph:v19, name=gifted_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Jan 26 09:42:48 compute-0 podman[95049]: 2026-01-26 09:42:48.778816433 +0000 UTC m=+0.749297539 container attach 1ff666db27a7816691451b4f124328ef0fce21ab3d63ddab134596e2ab18339b (image=quay.io/ceph/ceph:v19, name=gifted_haslett, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:49 compute-0 podman[95175]: 2026-01-26 09:42:49.07617024 +0000 UTC m=+0.070710799 container create 36f7685054a3b7c81ed1aae3b2f9a54083d3b79e0b71c39aa481cd4d4f87cd23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hoover, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:42:49 compute-0 systemd[1]: Started libpod-conmon-36f7685054a3b7c81ed1aae3b2f9a54083d3b79e0b71c39aa481cd4d4f87cd23.scope.
Jan 26 09:42:49 compute-0 podman[95175]: 2026-01-26 09:42:49.023920316 +0000 UTC m=+0.018460865 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:49 compute-0 podman[95175]: 2026-01-26 09:42:49.168131877 +0000 UTC m=+0.162672576 container init 36f7685054a3b7c81ed1aae3b2f9a54083d3b79e0b71c39aa481cd4d4f87cd23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hoover, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:49 compute-0 podman[95175]: 2026-01-26 09:42:49.174883342 +0000 UTC m=+0.169423871 container start 36f7685054a3b7c81ed1aae3b2f9a54083d3b79e0b71c39aa481cd4d4f87cd23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 09:42:49 compute-0 romantic_hoover[95192]: 167 167
Jan 26 09:42:49 compute-0 systemd[1]: libpod-36f7685054a3b7c81ed1aae3b2f9a54083d3b79e0b71c39aa481cd4d4f87cd23.scope: Deactivated successfully.
Jan 26 09:42:49 compute-0 conmon[95192]: conmon 36f7685054a3b7c81ed1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36f7685054a3b7c81ed1aae3b2f9a54083d3b79e0b71c39aa481cd4d4f87cd23.scope/container/memory.events
Jan 26 09:42:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 26 09:42:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/113232953' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 26 09:42:49 compute-0 gifted_haslett[95064]: [client.openstack]
Jan 26 09:42:49 compute-0 gifted_haslett[95064]:         key = AQDlNXdpAAAAABAAkYdaCUlKVeiqmlhElLFrLA==
Jan 26 09:42:49 compute-0 gifted_haslett[95064]:         caps mgr = "allow *"
Jan 26 09:42:49 compute-0 gifted_haslett[95064]:         caps mon = "profile rbd"
Jan 26 09:42:49 compute-0 gifted_haslett[95064]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 26 09:42:49 compute-0 podman[95175]: 2026-01-26 09:42:49.235627357 +0000 UTC m=+0.230167986 container attach 36f7685054a3b7c81ed1aae3b2f9a54083d3b79e0b71c39aa481cd4d4f87cd23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 09:42:49 compute-0 podman[95175]: 2026-01-26 09:42:49.236261895 +0000 UTC m=+0.230802524 container died 36f7685054a3b7c81ed1aae3b2f9a54083d3b79e0b71c39aa481cd4d4f87cd23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 09:42:49 compute-0 systemd[1]: libpod-1ff666db27a7816691451b4f124328ef0fce21ab3d63ddab134596e2ab18339b.scope: Deactivated successfully.
Jan 26 09:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2780325a191edf935f6ad4cb3a730488cd8db1b08d5282076e5ad2c609e149fa-merged.mount: Deactivated successfully.
Jan 26 09:42:49 compute-0 podman[95175]: 2026-01-26 09:42:49.308531915 +0000 UTC m=+0.303072444 container remove 36f7685054a3b7c81ed1aae3b2f9a54083d3b79e0b71c39aa481cd4d4f87cd23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hoover, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:49 compute-0 podman[95049]: 2026-01-26 09:42:49.316600565 +0000 UTC m=+1.287081661 container died 1ff666db27a7816691451b4f124328ef0fce21ab3d63ddab134596e2ab18339b (image=quay.io/ceph/ceph:v19, name=gifted_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:42:49 compute-0 systemd[1]: libpod-conmon-36f7685054a3b7c81ed1aae3b2f9a54083d3b79e0b71c39aa481cd4d4f87cd23.scope: Deactivated successfully.
Jan 26 09:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5135e0e7004e897e94ca247ffec28a61e79e28eda14cdd038543484d8ef7b6d7-merged.mount: Deactivated successfully.
Jan 26 09:42:49 compute-0 podman[95210]: 2026-01-26 09:42:49.527828914 +0000 UTC m=+0.272679175 container remove 1ff666db27a7816691451b4f124328ef0fce21ab3d63ddab134596e2ab18339b (image=quay.io/ceph/ceph:v19, name=gifted_haslett, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 26 09:42:49 compute-0 systemd[1]: libpod-conmon-1ff666db27a7816691451b4f124328ef0fce21ab3d63ddab134596e2ab18339b.scope: Deactivated successfully.
Jan 26 09:42:49 compute-0 sudo[95046]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:49 compute-0 podman[95231]: 2026-01-26 09:42:49.554116721 +0000 UTC m=+0.144852620 container create dd23a716284bd4a4d0835333ad4fe78772fb81d4246f0a3c7141f26f5510739a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 09:42:49 compute-0 systemd[1]: Started libpod-conmon-dd23a716284bd4a4d0835333ad4fe78772fb81d4246f0a3c7141f26f5510739a.scope.
Jan 26 09:42:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7112ce52f7402048dc84cc94cb3ac655d1b0bd2522a9d8534090d2d3b13fa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7112ce52f7402048dc84cc94cb3ac655d1b0bd2522a9d8534090d2d3b13fa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7112ce52f7402048dc84cc94cb3ac655d1b0bd2522a9d8534090d2d3b13fa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7112ce52f7402048dc84cc94cb3ac655d1b0bd2522a9d8534090d2d3b13fa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7112ce52f7402048dc84cc94cb3ac655d1b0bd2522a9d8534090d2d3b13fa9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:49 compute-0 podman[95231]: 2026-01-26 09:42:49.631042809 +0000 UTC m=+0.221778738 container init dd23a716284bd4a4d0835333ad4fe78772fb81d4246f0a3c7141f26f5510739a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_nash, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:49 compute-0 podman[95231]: 2026-01-26 09:42:49.535311378 +0000 UTC m=+0.126047297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:49 compute-0 podman[95231]: 2026-01-26 09:42:49.646459759 +0000 UTC m=+0.237195658 container start dd23a716284bd4a4d0835333ad4fe78772fb81d4246f0a3c7141f26f5510739a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_nash, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:49 compute-0 podman[95231]: 2026-01-26 09:42:49.649320277 +0000 UTC m=+0.240056176 container attach dd23a716284bd4a4d0835333ad4fe78772fb81d4246f0a3c7141f26f5510739a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_nash, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:49 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 26 09:42:49 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 26 09:42:49 compute-0 ceph-mon[74456]: 5.2 scrub starts
Jan 26 09:42:49 compute-0 ceph-mon[74456]: 5.2 scrub ok
Jan 26 09:42:49 compute-0 ceph-mon[74456]: 6.1b scrub starts
Jan 26 09:42:49 compute-0 ceph-mon[74456]: 6.1b scrub ok
Jan 26 09:42:49 compute-0 ceph-mon[74456]: 3.18 scrub starts
Jan 26 09:42:49 compute-0 ceph-mon[74456]: 3.18 scrub ok
Jan 26 09:42:49 compute-0 ceph-mon[74456]: pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Jan 26 09:42:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/113232953' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 26 09:42:49 compute-0 heuristic_nash[95247]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:42:49 compute-0 heuristic_nash[95247]: --> All data devices are unavailable
Jan 26 09:42:49 compute-0 systemd[1]: libpod-dd23a716284bd4a4d0835333ad4fe78772fb81d4246f0a3c7141f26f5510739a.scope: Deactivated successfully.
Jan 26 09:42:49 compute-0 podman[95231]: 2026-01-26 09:42:49.962640949 +0000 UTC m=+0.553376848 container died dd23a716284bd4a4d0835333ad4fe78772fb81d4246f0a3c7141f26f5510739a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a7112ce52f7402048dc84cc94cb3ac655d1b0bd2522a9d8534090d2d3b13fa9-merged.mount: Deactivated successfully.
Jan 26 09:42:50 compute-0 podman[95231]: 2026-01-26 09:42:50.005640391 +0000 UTC m=+0.596376290 container remove dd23a716284bd4a4d0835333ad4fe78772fb81d4246f0a3c7141f26f5510739a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:50 compute-0 systemd[1]: libpod-conmon-dd23a716284bd4a4d0835333ad4fe78772fb81d4246f0a3c7141f26f5510739a.scope: Deactivated successfully.
Jan 26 09:42:50 compute-0 sudo[95092]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:50 compute-0 sudo[95275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:50 compute-0 sudo[95275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:50 compute-0 sudo[95275]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:50 compute-0 sudo[95300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:42:50 compute-0 sudo[95300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Jan 26 09:42:50 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 26 09:42:50 compute-0 ceph-mon[74456]: 5.1 scrub starts
Jan 26 09:42:50 compute-0 ceph-mon[74456]: 5.1 scrub ok
Jan 26 09:42:50 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 26 09:42:50 compute-0 podman[95390]: 2026-01-26 09:42:50.983303337 +0000 UTC m=+0.041393139 container create ebb96853f47b14aa0bab93af3c279fc9e03120ced6ec75bb723fbc5058462758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_raman, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:51 compute-0 systemd[1]: Started libpod-conmon-ebb96853f47b14aa0bab93af3c279fc9e03120ced6ec75bb723fbc5058462758.scope.
Jan 26 09:42:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:51 compute-0 podman[95390]: 2026-01-26 09:42:51.062730982 +0000 UTC m=+0.120820814 container init ebb96853f47b14aa0bab93af3c279fc9e03120ced6ec75bb723fbc5058462758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 26 09:42:51 compute-0 podman[95390]: 2026-01-26 09:42:50.96580319 +0000 UTC m=+0.023893012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:51 compute-0 podman[95390]: 2026-01-26 09:42:51.070441883 +0000 UTC m=+0.128531685 container start ebb96853f47b14aa0bab93af3c279fc9e03120ced6ec75bb723fbc5058462758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_raman, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 26 09:42:51 compute-0 podman[95390]: 2026-01-26 09:42:51.074924155 +0000 UTC m=+0.133014007 container attach ebb96853f47b14aa0bab93af3c279fc9e03120ced6ec75bb723fbc5058462758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:51 compute-0 confident_raman[95436]: 167 167
Jan 26 09:42:51 compute-0 systemd[1]: libpod-ebb96853f47b14aa0bab93af3c279fc9e03120ced6ec75bb723fbc5058462758.scope: Deactivated successfully.
Jan 26 09:42:51 compute-0 conmon[95436]: conmon ebb96853f47b14aa0bab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebb96853f47b14aa0bab93af3c279fc9e03120ced6ec75bb723fbc5058462758.scope/container/memory.events
Jan 26 09:42:51 compute-0 podman[95390]: 2026-01-26 09:42:51.077267508 +0000 UTC m=+0.135357350 container died ebb96853f47b14aa0bab93af3c279fc9e03120ced6ec75bb723fbc5058462758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_raman, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 26 09:42:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-397833f620104a90cc1477106f33da4377574a9b15be7bb3d36ae935932a0ae0-merged.mount: Deactivated successfully.
Jan 26 09:42:51 compute-0 podman[95390]: 2026-01-26 09:42:51.112537471 +0000 UTC m=+0.170627273 container remove ebb96853f47b14aa0bab93af3c279fc9e03120ced6ec75bb723fbc5058462758 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:51 compute-0 systemd[1]: libpod-conmon-ebb96853f47b14aa0bab93af3c279fc9e03120ced6ec75bb723fbc5058462758.scope: Deactivated successfully.
Jan 26 09:42:51 compute-0 podman[95517]: 2026-01-26 09:42:51.262686644 +0000 UTC m=+0.039891089 container create 4e279ff1f8a7958813e7245f024e32c7ea94819d146732b86a31696cf27c2a7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shtern, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:51 compute-0 systemd[1]: Started libpod-conmon-4e279ff1f8a7958813e7245f024e32c7ea94819d146732b86a31696cf27c2a7a.scope.
Jan 26 09:42:51 compute-0 sudo[95570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heosyqravcuxuaowllfbatxpyzcwmgpa ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769420570.9258466-37597-30532521004026/async_wrapper.py j954714073995 30 /home/zuul/.ansible/tmp/ansible-tmp-1769420570.9258466-37597-30532521004026/AnsiballZ_command.py _'
Jan 26 09:42:51 compute-0 sudo[95570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b6de8400032204a87582ab2dc6e6beda709bbb3a7354b0621146ac21fb7c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:51 compute-0 podman[95517]: 2026-01-26 09:42:51.242910555 +0000 UTC m=+0.020115020 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b6de8400032204a87582ab2dc6e6beda709bbb3a7354b0621146ac21fb7c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b6de8400032204a87582ab2dc6e6beda709bbb3a7354b0621146ac21fb7c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b6de8400032204a87582ab2dc6e6beda709bbb3a7354b0621146ac21fb7c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:51 compute-0 podman[95517]: 2026-01-26 09:42:51.351912737 +0000 UTC m=+0.129117202 container init 4e279ff1f8a7958813e7245f024e32c7ea94819d146732b86a31696cf27c2a7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 09:42:51 compute-0 podman[95517]: 2026-01-26 09:42:51.358885467 +0000 UTC m=+0.136089932 container start 4e279ff1f8a7958813e7245f024e32c7ea94819d146732b86a31696cf27c2a7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:51 compute-0 podman[95517]: 2026-01-26 09:42:51.36192229 +0000 UTC m=+0.139126735 container attach 4e279ff1f8a7958813e7245f024e32c7ea94819d146732b86a31696cf27c2a7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shtern, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:51 compute-0 ansible-async_wrapper.py[95574]: Invoked with j954714073995 30 /home/zuul/.ansible/tmp/ansible-tmp-1769420570.9258466-37597-30532521004026/AnsiballZ_command.py _
Jan 26 09:42:51 compute-0 ansible-async_wrapper.py[95580]: Starting module and watcher
Jan 26 09:42:51 compute-0 ansible-async_wrapper.py[95580]: Start watching 95581 (30)
Jan 26 09:42:51 compute-0 ansible-async_wrapper.py[95581]: Start module (95581)
Jan 26 09:42:51 compute-0 ansible-async_wrapper.py[95574]: Return async_wrapper task started.
Jan 26 09:42:51 compute-0 sudo[95570]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:51 compute-0 python3[95582]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]: {
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:     "0": [
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:         {
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:             "devices": [
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "/dev/loop3"
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:             ],
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:             "lv_name": "ceph_lv0",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:             "lv_size": "21470642176",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:             "name": "ceph_lv0",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:             "tags": {
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "ceph.cluster_name": "ceph",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "ceph.crush_device_class": "",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "ceph.encrypted": "0",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "ceph.osd_id": "0",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "ceph.type": "block",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "ceph.vdo": "0",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:                 "ceph.with_tpm": "0"
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:             },
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:             "type": "block",
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:             "vg_name": "ceph_vg0"
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:         }
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]:     ]
Jan 26 09:42:51 compute-0 vibrant_shtern[95572]: }
Jan 26 09:42:51 compute-0 systemd[1]: libpod-4e279ff1f8a7958813e7245f024e32c7ea94819d146732b86a31696cf27c2a7a.scope: Deactivated successfully.
Jan 26 09:42:51 compute-0 podman[95517]: 2026-01-26 09:42:51.670552174 +0000 UTC m=+0.447756619 container died 4e279ff1f8a7958813e7245f024e32c7ea94819d146732b86a31696cf27c2a7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shtern, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:51 compute-0 podman[95587]: 2026-01-26 09:42:51.684302819 +0000 UTC m=+0.049001757 container create 664193b3557f754b709b56dba87a9c47edcb7d844f12e9ace45c164fd0bd7fe0 (image=quay.io/ceph/ceph:v19, name=admiring_curie, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f9b6de8400032204a87582ab2dc6e6beda709bbb3a7354b0621146ac21fb7c1-merged.mount: Deactivated successfully.
Jan 26 09:42:51 compute-0 podman[95517]: 2026-01-26 09:42:51.717551086 +0000 UTC m=+0.494755541 container remove 4e279ff1f8a7958813e7245f024e32c7ea94819d146732b86a31696cf27c2a7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 26 09:42:51 compute-0 systemd[1]: Started libpod-conmon-664193b3557f754b709b56dba87a9c47edcb7d844f12e9ace45c164fd0bd7fe0.scope.
Jan 26 09:42:51 compute-0 systemd[1]: libpod-conmon-4e279ff1f8a7958813e7245f024e32c7ea94819d146732b86a31696cf27c2a7a.scope: Deactivated successfully.
Jan 26 09:42:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f098d9a3ee51bcc0cba4ca2ca68d826ed88f6a854eff65b5fe206c50b616e0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14f098d9a3ee51bcc0cba4ca2ca68d826ed88f6a854eff65b5fe206c50b616e0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:51 compute-0 sudo[95300]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:51 compute-0 podman[95587]: 2026-01-26 09:42:51.666175045 +0000 UTC m=+0.030874003 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:51 compute-0 podman[95587]: 2026-01-26 09:42:51.761315519 +0000 UTC m=+0.126014457 container init 664193b3557f754b709b56dba87a9c47edcb7d844f12e9ace45c164fd0bd7fe0 (image=quay.io/ceph/ceph:v19, name=admiring_curie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:51 compute-0 podman[95587]: 2026-01-26 09:42:51.767641312 +0000 UTC m=+0.132340250 container start 664193b3557f754b709b56dba87a9c47edcb7d844f12e9ace45c164fd0bd7fe0 (image=quay.io/ceph/ceph:v19, name=admiring_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 09:42:51 compute-0 podman[95587]: 2026-01-26 09:42:51.770996083 +0000 UTC m=+0.135695041 container attach 664193b3557f754b709b56dba87a9c47edcb7d844f12e9ace45c164fd0bd7fe0 (image=quay.io/ceph/ceph:v19, name=admiring_curie, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:51 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 13 completed events
Jan 26 09:42:51 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:42:51 compute-0 sudo[95618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:51 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:51 compute-0 sudo[95618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:51 compute-0 sudo[95618]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:51 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 26 09:42:51 compute-0 sudo[95643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:42:51 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 26 09:42:51 compute-0 sudo[95643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:51 compute-0 ceph-mon[74456]: 2.19 scrub starts
Jan 26 09:42:51 compute-0 ceph-mon[74456]: 2.19 scrub ok
Jan 26 09:42:51 compute-0 ceph-mon[74456]: 3.3 scrub starts
Jan 26 09:42:51 compute-0 ceph-mon[74456]: 3.3 scrub ok
Jan 26 09:42:51 compute-0 ceph-mon[74456]: pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Jan 26 09:42:51 compute-0 ceph-mon[74456]: 5.1e scrub starts
Jan 26 09:42:51 compute-0 ceph-mon[74456]: 5.1e scrub ok
Jan 26 09:42:51 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:51 compute-0 ceph-mon[74456]: 3.1 scrub starts
Jan 26 09:42:51 compute-0 ceph-mon[74456]: 3.1 scrub ok
Jan 26 09:42:52 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14526 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 09:42:52 compute-0 admiring_curie[95613]: 
Jan 26 09:42:52 compute-0 admiring_curie[95613]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 26 09:42:52 compute-0 systemd[1]: libpod-664193b3557f754b709b56dba87a9c47edcb7d844f12e9ace45c164fd0bd7fe0.scope: Deactivated successfully.
Jan 26 09:42:52 compute-0 podman[95587]: 2026-01-26 09:42:52.133233799 +0000 UTC m=+0.497932757 container died 664193b3557f754b709b56dba87a9c47edcb7d844f12e9ace45c164fd0bd7fe0 (image=quay.io/ceph/ceph:v19, name=admiring_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-14f098d9a3ee51bcc0cba4ca2ca68d826ed88f6a854eff65b5fe206c50b616e0-merged.mount: Deactivated successfully.
Jan 26 09:42:52 compute-0 podman[95587]: 2026-01-26 09:42:52.176438548 +0000 UTC m=+0.541137486 container remove 664193b3557f754b709b56dba87a9c47edcb7d844f12e9ace45c164fd0bd7fe0 (image=quay.io/ceph/ceph:v19, name=admiring_curie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:42:52 compute-0 systemd[1]: libpod-conmon-664193b3557f754b709b56dba87a9c47edcb7d844f12e9ace45c164fd0bd7fe0.scope: Deactivated successfully.
Jan 26 09:42:52 compute-0 ansible-async_wrapper.py[95581]: Module complete (95581)
Jan 26 09:42:52 compute-0 podman[95741]: 2026-01-26 09:42:52.232213707 +0000 UTC m=+0.033166334 container create db5be80524a80f402df0f1ffe52d0100998a005a25f8fb66c5f2ea6e88a7fade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_chaum, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:52 compute-0 systemd[1]: Started libpod-conmon-db5be80524a80f402df0f1ffe52d0100998a005a25f8fb66c5f2ea6e88a7fade.scope.
Jan 26 09:42:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:52 compute-0 podman[95741]: 2026-01-26 09:42:52.287410813 +0000 UTC m=+0.088363490 container init db5be80524a80f402df0f1ffe52d0100998a005a25f8fb66c5f2ea6e88a7fade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_chaum, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:52 compute-0 podman[95741]: 2026-01-26 09:42:52.29318874 +0000 UTC m=+0.094141367 container start db5be80524a80f402df0f1ffe52d0100998a005a25f8fb66c5f2ea6e88a7fade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_chaum, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 09:42:52 compute-0 nostalgic_chaum[95758]: 167 167
Jan 26 09:42:52 compute-0 podman[95741]: 2026-01-26 09:42:52.296228833 +0000 UTC m=+0.097181510 container attach db5be80524a80f402df0f1ffe52d0100998a005a25f8fb66c5f2ea6e88a7fade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_chaum, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 09:42:52 compute-0 systemd[1]: libpod-db5be80524a80f402df0f1ffe52d0100998a005a25f8fb66c5f2ea6e88a7fade.scope: Deactivated successfully.
Jan 26 09:42:52 compute-0 podman[95741]: 2026-01-26 09:42:52.296947813 +0000 UTC m=+0.097900460 container died db5be80524a80f402df0f1ffe52d0100998a005a25f8fb66c5f2ea6e88a7fade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:52 compute-0 podman[95741]: 2026-01-26 09:42:52.217581849 +0000 UTC m=+0.018534506 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab40dc4cd87dcdcb3b2c11c55bb9cedad6c1ac86d8a9893402630001382c6c4e-merged.mount: Deactivated successfully.
Jan 26 09:42:52 compute-0 podman[95741]: 2026-01-26 09:42:52.330384534 +0000 UTC m=+0.131337161 container remove db5be80524a80f402df0f1ffe52d0100998a005a25f8fb66c5f2ea6e88a7fade (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:52 compute-0 systemd[1]: libpod-conmon-db5be80524a80f402df0f1ffe52d0100998a005a25f8fb66c5f2ea6e88a7fade.scope: Deactivated successfully.
Jan 26 09:42:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:42:52 compute-0 podman[95782]: 2026-01-26 09:42:52.459333021 +0000 UTC m=+0.033596007 container create b333ff23c798116ccb2b347e165699a3d7b909c9c92043da60488a9a0537cf08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mclean, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 26 09:42:52 compute-0 systemd[1]: Started libpod-conmon-b333ff23c798116ccb2b347e165699a3d7b909c9c92043da60488a9a0537cf08.scope.
Jan 26 09:42:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ceb447ca3abc2f8841476f07dc9c887d3b08abab479dfdcd0d2fbb16be769d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ceb447ca3abc2f8841476f07dc9c887d3b08abab479dfdcd0d2fbb16be769d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ceb447ca3abc2f8841476f07dc9c887d3b08abab479dfdcd0d2fbb16be769d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ceb447ca3abc2f8841476f07dc9c887d3b08abab479dfdcd0d2fbb16be769d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:52 compute-0 podman[95782]: 2026-01-26 09:42:52.521591097 +0000 UTC m=+0.095854103 container init b333ff23c798116ccb2b347e165699a3d7b909c9c92043da60488a9a0537cf08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 09:42:52 compute-0 podman[95782]: 2026-01-26 09:42:52.527515219 +0000 UTC m=+0.101778205 container start b333ff23c798116ccb2b347e165699a3d7b909c9c92043da60488a9a0537cf08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mclean, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 26 09:42:52 compute-0 podman[95782]: 2026-01-26 09:42:52.530243053 +0000 UTC m=+0.104506059 container attach b333ff23c798116ccb2b347e165699a3d7b909c9c92043da60488a9a0537cf08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mclean, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:52 compute-0 podman[95782]: 2026-01-26 09:42:52.445328449 +0000 UTC m=+0.019591455 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:52 compute-0 sudo[95849]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shjjrwhrlelsexcczcukgkjssbfqptgn ; /usr/bin/python3'
Jan 26 09:42:52 compute-0 sudo[95849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Jan 26 09:42:52 compute-0 python3[95851]: ansible-ansible.legacy.async_status Invoked with jid=j954714073995.95574 mode=status _async_dir=/root/.ansible_async
Jan 26 09:42:52 compute-0 sudo[95849]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:52 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 26 09:42:52 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 26 09:42:52 compute-0 sudo[95941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpnzebfrdnnndcoqklfxqviowvjtdjhn ; /usr/bin/python3'
Jan 26 09:42:52 compute-0 sudo[95941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:52 compute-0 ceph-mon[74456]: 5.f scrub starts
Jan 26 09:42:52 compute-0 ceph-mon[74456]: 5.f scrub ok
Jan 26 09:42:52 compute-0 ceph-mon[74456]: from='client.14526 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 09:42:52 compute-0 ceph-mon[74456]: 7.2 scrub starts
Jan 26 09:42:52 compute-0 ceph-mon[74456]: 7.2 scrub ok
Jan 26 09:42:53 compute-0 python3[95948]: ansible-ansible.legacy.async_status Invoked with jid=j954714073995.95574 mode=cleanup _async_dir=/root/.ansible_async
Jan 26 09:42:53 compute-0 sudo[95941]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:53 compute-0 lvm[95970]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:42:53 compute-0 lvm[95970]: VG ceph_vg0 finished
Jan 26 09:42:53 compute-0 jolly_mclean[95798]: {}
Jan 26 09:42:53 compute-0 systemd[1]: libpod-b333ff23c798116ccb2b347e165699a3d7b909c9c92043da60488a9a0537cf08.scope: Deactivated successfully.
Jan 26 09:42:53 compute-0 podman[95782]: 2026-01-26 09:42:53.216314078 +0000 UTC m=+0.790577064 container died b333ff23c798116ccb2b347e165699a3d7b909c9c92043da60488a9a0537cf08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mclean, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:53 compute-0 systemd[1]: libpod-b333ff23c798116ccb2b347e165699a3d7b909c9c92043da60488a9a0537cf08.scope: Consumed 1.028s CPU time.
Jan 26 09:42:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-26ceb447ca3abc2f8841476f07dc9c887d3b08abab479dfdcd0d2fbb16be769d-merged.mount: Deactivated successfully.
Jan 26 09:42:53 compute-0 podman[95782]: 2026-01-26 09:42:53.253894983 +0000 UTC m=+0.828157969 container remove b333ff23c798116ccb2b347e165699a3d7b909c9c92043da60488a9a0537cf08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mclean, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:53 compute-0 systemd[1]: libpod-conmon-b333ff23c798116ccb2b347e165699a3d7b909c9c92043da60488a9a0537cf08.scope: Deactivated successfully.
Jan 26 09:42:53 compute-0 sudo[95643]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:42:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:42:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:53 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 36651909-805a-45c6-9dbe-dca41addc4d5 (Updating rgw.rgw deployment (+1 -> 3))
Jan 26 09:42:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qkzyup", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 26 09:42:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qkzyup", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:42:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qkzyup", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 09:42:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 26 09:42:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:42:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:53 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.qkzyup on compute-0
Jan 26 09:42:53 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.qkzyup on compute-0
Jan 26 09:42:53 compute-0 sudo[95986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:53 compute-0 sudo[95986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:53 compute-0 sudo[95986]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:53 compute-0 sudo[96011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:42:53 compute-0 sudo[96011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:53 compute-0 sudo[96059]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-domqogxzhmezewiryrkyvsbfogphqztx ; /usr/bin/python3'
Jan 26 09:42:53 compute-0 sudo[96059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:53 compute-0 python3[96061]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:53 compute-0 podman[96069]: 2026-01-26 09:42:53.688307487 +0000 UTC m=+0.045865921 container create 72d97a7903bf5a44f33aee1cb25f42075e45d544fb752d75aa15b8c7d08fc6fc (image=quay.io/ceph/ceph:v19, name=inspiring_germain, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 26 09:42:53 compute-0 systemd[1]: Started libpod-conmon-72d97a7903bf5a44f33aee1cb25f42075e45d544fb752d75aa15b8c7d08fc6fc.scope.
Jan 26 09:42:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:53 compute-0 podman[96069]: 2026-01-26 09:42:53.67043402 +0000 UTC m=+0.027992484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db0a1becd3e7d96711998c87b5787bc1886ba1f1686aad6b9ef1ffb3857276b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db0a1becd3e7d96711998c87b5787bc1886ba1f1686aad6b9ef1ffb3857276b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:53 compute-0 podman[96069]: 2026-01-26 09:42:53.782536667 +0000 UTC m=+0.140095111 container init 72d97a7903bf5a44f33aee1cb25f42075e45d544fb752d75aa15b8c7d08fc6fc (image=quay.io/ceph/ceph:v19, name=inspiring_germain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 26 09:42:53 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 26 09:42:53 compute-0 podman[96069]: 2026-01-26 09:42:53.793593537 +0000 UTC m=+0.151151981 container start 72d97a7903bf5a44f33aee1cb25f42075e45d544fb752d75aa15b8c7d08fc6fc (image=quay.io/ceph/ceph:v19, name=inspiring_germain, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:53 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 26 09:42:53 compute-0 podman[96069]: 2026-01-26 09:42:53.797471673 +0000 UTC m=+0.155030107 container attach 72d97a7903bf5a44f33aee1cb25f42075e45d544fb752d75aa15b8c7d08fc6fc (image=quay.io/ceph/ceph:v19, name=inspiring_germain, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:53 compute-0 podman[96121]: 2026-01-26 09:42:53.830597007 +0000 UTC m=+0.032756524 container create 192b01b8bc233e37dced83b1709090345f211e885437f55c5da1a239f814d0ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_robinson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 26 09:42:53 compute-0 systemd[1]: Started libpod-conmon-192b01b8bc233e37dced83b1709090345f211e885437f55c5da1a239f814d0ad.scope.
Jan 26 09:42:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:53 compute-0 podman[96121]: 2026-01-26 09:42:53.892884845 +0000 UTC m=+0.095044372 container init 192b01b8bc233e37dced83b1709090345f211e885437f55c5da1a239f814d0ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_robinson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:53 compute-0 podman[96121]: 2026-01-26 09:42:53.898971491 +0000 UTC m=+0.101131018 container start 192b01b8bc233e37dced83b1709090345f211e885437f55c5da1a239f814d0ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 09:42:53 compute-0 stoic_robinson[96137]: 167 167
Jan 26 09:42:53 compute-0 podman[96121]: 2026-01-26 09:42:53.902451245 +0000 UTC m=+0.104610772 container attach 192b01b8bc233e37dced83b1709090345f211e885437f55c5da1a239f814d0ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_robinson, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:53 compute-0 systemd[1]: libpod-192b01b8bc233e37dced83b1709090345f211e885437f55c5da1a239f814d0ad.scope: Deactivated successfully.
Jan 26 09:42:53 compute-0 podman[96121]: 2026-01-26 09:42:53.903820513 +0000 UTC m=+0.105980040 container died 192b01b8bc233e37dced83b1709090345f211e885437f55c5da1a239f814d0ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_robinson, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:53 compute-0 podman[96121]: 2026-01-26 09:42:53.817013536 +0000 UTC m=+0.019173063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-999691b708c8482c0da9a91f77a871046940a8589b0f1779d68cbc505c5ccc29-merged.mount: Deactivated successfully.
Jan 26 09:42:53 compute-0 podman[96121]: 2026-01-26 09:42:53.937875992 +0000 UTC m=+0.140035509 container remove 192b01b8bc233e37dced83b1709090345f211e885437f55c5da1a239f814d0ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_robinson, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:42:53 compute-0 systemd[1]: libpod-conmon-192b01b8bc233e37dced83b1709090345f211e885437f55c5da1a239f814d0ad.scope: Deactivated successfully.
Jan 26 09:42:53 compute-0 systemd[1]: Reloading.
Jan 26 09:42:54 compute-0 ceph-mon[74456]: 5.18 scrub starts
Jan 26 09:42:54 compute-0 ceph-mon[74456]: 5.18 scrub ok
Jan 26 09:42:54 compute-0 ceph-mon[74456]: pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Jan 26 09:42:54 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:54 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:54 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qkzyup", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:42:54 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qkzyup", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 09:42:54 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:54 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:54 compute-0 ceph-mon[74456]: Deploying daemon rgw.rgw.compute-0.qkzyup on compute-0
Jan 26 09:42:54 compute-0 ceph-mon[74456]: 7.6 scrub starts
Jan 26 09:42:54 compute-0 ceph-mon[74456]: 7.6 scrub ok
Jan 26 09:42:54 compute-0 systemd-sysv-generator[96199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:42:54 compute-0 systemd-rc-local-generator[96196]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:42:54 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14532 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 09:42:54 compute-0 inspiring_germain[96105]: 
Jan 26 09:42:54 compute-0 inspiring_germain[96105]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 26 09:42:54 compute-0 podman[96069]: 2026-01-26 09:42:54.193977704 +0000 UTC m=+0.551536158 container died 72d97a7903bf5a44f33aee1cb25f42075e45d544fb752d75aa15b8c7d08fc6fc (image=quay.io/ceph/ceph:v19, name=inspiring_germain, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 09:42:54 compute-0 systemd[1]: libpod-72d97a7903bf5a44f33aee1cb25f42075e45d544fb752d75aa15b8c7d08fc6fc.scope: Deactivated successfully.
Jan 26 09:42:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-4db0a1becd3e7d96711998c87b5787bc1886ba1f1686aad6b9ef1ffb3857276b-merged.mount: Deactivated successfully.
Jan 26 09:42:54 compute-0 podman[96069]: 2026-01-26 09:42:54.23967963 +0000 UTC m=+0.597238064 container remove 72d97a7903bf5a44f33aee1cb25f42075e45d544fb752d75aa15b8c7d08fc6fc (image=quay.io/ceph/ceph:v19, name=inspiring_germain, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Jan 26 09:42:54 compute-0 systemd[1]: libpod-conmon-72d97a7903bf5a44f33aee1cb25f42075e45d544fb752d75aa15b8c7d08fc6fc.scope: Deactivated successfully.
Jan 26 09:42:54 compute-0 sudo[96059]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:54 compute-0 systemd[1]: Reloading.
Jan 26 09:42:54 compute-0 systemd-sysv-generator[96253]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:42:54 compute-0 systemd-rc-local-generator[96250]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:42:54 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.qkzyup for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:42:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v16: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:54 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 26 09:42:54 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 26 09:42:54 compute-0 podman[96306]: 2026-01-26 09:42:54.761352443 +0000 UTC m=+0.047724832 container create 8a3d15367a4a40c864748ab48b0c1cbbc90a9feea0c4c03a6e55ec5f362fe42a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-rgw-rgw-compute-0-qkzyup, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 26 09:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717dda151776b48ca76483b8a089b205ff01f7178341106327dc23c2ef9ecad4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717dda151776b48ca76483b8a089b205ff01f7178341106327dc23c2ef9ecad4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717dda151776b48ca76483b8a089b205ff01f7178341106327dc23c2ef9ecad4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717dda151776b48ca76483b8a089b205ff01f7178341106327dc23c2ef9ecad4/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.qkzyup supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:54 compute-0 podman[96306]: 2026-01-26 09:42:54.835055312 +0000 UTC m=+0.121427691 container init 8a3d15367a4a40c864748ab48b0c1cbbc90a9feea0c4c03a6e55ec5f362fe42a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-rgw-rgw-compute-0-qkzyup, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:42:54 compute-0 podman[96306]: 2026-01-26 09:42:54.742461148 +0000 UTC m=+0.028833517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:54 compute-0 podman[96306]: 2026-01-26 09:42:54.852405596 +0000 UTC m=+0.138777945 container start 8a3d15367a4a40c864748ab48b0c1cbbc90a9feea0c4c03a6e55ec5f362fe42a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-rgw-rgw-compute-0-qkzyup, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 09:42:54 compute-0 bash[96306]: 8a3d15367a4a40c864748ab48b0c1cbbc90a9feea0c4c03a6e55ec5f362fe42a
Jan 26 09:42:54 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.qkzyup for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:42:54 compute-0 sudo[96011]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:54 compute-0 radosgw[96326]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 26 09:42:54 compute-0 radosgw[96326]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Jan 26 09:42:54 compute-0 radosgw[96326]: framework: beast
Jan 26 09:42:54 compute-0 radosgw[96326]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 26 09:42:54 compute-0 radosgw[96326]: init_numa not setting numa affinity
Jan 26 09:42:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:42:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:42:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 26 09:42:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:54 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev 36651909-805a-45c6-9dbe-dca41addc4d5 (Updating rgw.rgw deployment (+1 -> 3))
Jan 26 09:42:54 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 36651909-805a-45c6-9dbe-dca41addc4d5 (Updating rgw.rgw deployment (+1 -> 3)) in 2 seconds
Jan 26 09:42:54 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:54 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 26 09:42:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:55 compute-0 sudo[96559]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vamqzxbezzpqpmledruqisgbrckpjaot ; /usr/bin/python3'
Jan 26 09:42:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 26 09:42:55 compute-0 sudo[96559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:55 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev f89e814a-e4c2-461e-94c0-9d4ae432b796 (Updating mds.cephfs deployment (+3 -> 3))
Jan 26 09:42:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zprrum", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 26 09:42:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zprrum", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 26 09:42:55 compute-0 ceph-mon[74456]: 5.1b deep-scrub starts
Jan 26 09:42:55 compute-0 ceph-mon[74456]: 5.1b deep-scrub ok
Jan 26 09:42:55 compute-0 ceph-mon[74456]: 6.15 scrub starts
Jan 26 09:42:55 compute-0 ceph-mon[74456]: 6.15 scrub ok
Jan 26 09:42:55 compute-0 ceph-mon[74456]: from='client.14532 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 09:42:55 compute-0 ceph-mon[74456]: 7.18 scrub starts
Jan 26 09:42:55 compute-0 ceph-mon[74456]: 7.18 scrub ok
Jan 26 09:42:55 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:55 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:55 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:55 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zprrum", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 26 09:42:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:42:55 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:55 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.zprrum on compute-2
Jan 26 09:42:55 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.zprrum on compute-2
Jan 26 09:42:55 compute-0 python3[96894]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:55 compute-0 podman[96940]: 2026-01-26 09:42:55.190259807 +0000 UTC m=+0.040801214 container create 5d566d71b71ccc81ade2be04d8ce076cb8db01fa66199b025c84f45200655e92 (image=quay.io/ceph/ceph:v19, name=hungry_mestorf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 26 09:42:55 compute-0 radosgw[96326]: v1 topic migration: starting v1 topic migration..
Jan 26 09:42:55 compute-0 radosgw[96326]: LDAP not started since no server URIs were provided in the configuration.
Jan 26 09:42:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-rgw-rgw-compute-0-qkzyup[96322]: 2026-01-26T09:42:55.191+0000 7f3e765c9980 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 26 09:42:55 compute-0 radosgw[96326]: v1 topic migration: finished v1 topic migration
Jan 26 09:42:55 compute-0 radosgw[96326]: framework: beast
Jan 26 09:42:55 compute-0 radosgw[96326]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 26 09:42:55 compute-0 radosgw[96326]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 26 09:42:55 compute-0 systemd[1]: Started libpod-conmon-5d566d71b71ccc81ade2be04d8ce076cb8db01fa66199b025c84f45200655e92.scope.
Jan 26 09:42:55 compute-0 radosgw[96326]: starting handler: beast
Jan 26 09:42:55 compute-0 radosgw[96326]: set uid:gid to 167:167 (ceph:ceph)
Jan 26 09:42:55 compute-0 radosgw[96326]: mgrc service_daemon_register rgw.14550 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.qkzyup,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=88adcf12-6dc3-48b6-86bb-ed23fd934e78,zone_name=default,zonegroup_id=423841e2-30ae-45d1-92b7-7a24aa3d4488,zonegroup_name=default}
Jan 26 09:42:55 compute-0 podman[96940]: 2026-01-26 09:42:55.170618351 +0000 UTC m=+0.021159778 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/789fbc5918987c369854e70361de6fc7431cd89dac48146d08197feb5930fa85/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/789fbc5918987c369854e70361de6fc7431cd89dac48146d08197feb5930fa85/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:55 compute-0 podman[96940]: 2026-01-26 09:42:55.29163753 +0000 UTC m=+0.142178957 container init 5d566d71b71ccc81ade2be04d8ce076cb8db01fa66199b025c84f45200655e92 (image=quay.io/ceph/ceph:v19, name=hungry_mestorf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 09:42:55 compute-0 podman[96940]: 2026-01-26 09:42:55.298860848 +0000 UTC m=+0.149402255 container start 5d566d71b71ccc81ade2be04d8ce076cb8db01fa66199b025c84f45200655e92 (image=quay.io/ceph/ceph:v19, name=hungry_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:42:55 compute-0 podman[96940]: 2026-01-26 09:42:55.302845586 +0000 UTC m=+0.153386993 container attach 5d566d71b71ccc81ade2be04d8ce076cb8db01fa66199b025c84f45200655e92 (image=quay.io/ceph/ceph:v19, name=hungry_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 09:42:55 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 09:42:55 compute-0 hungry_mestorf[96989]: 
Jan 26 09:42:55 compute-0 hungry_mestorf[96989]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 26 09:42:55 compute-0 systemd[1]: libpod-5d566d71b71ccc81ade2be04d8ce076cb8db01fa66199b025c84f45200655e92.scope: Deactivated successfully.
Jan 26 09:42:55 compute-0 podman[96940]: 2026-01-26 09:42:55.666947262 +0000 UTC m=+0.517488669 container died 5d566d71b71ccc81ade2be04d8ce076cb8db01fa66199b025c84f45200655e92 (image=quay.io/ceph/ceph:v19, name=hungry_mestorf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-789fbc5918987c369854e70361de6fc7431cd89dac48146d08197feb5930fa85-merged.mount: Deactivated successfully.
Jan 26 09:42:55 compute-0 podman[96940]: 2026-01-26 09:42:55.717984504 +0000 UTC m=+0.568525921 container remove 5d566d71b71ccc81ade2be04d8ce076cb8db01fa66199b025c84f45200655e92 (image=quay.io/ceph/ceph:v19, name=hungry_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:55 compute-0 systemd[1]: libpod-conmon-5d566d71b71ccc81ade2be04d8ce076cb8db01fa66199b025c84f45200655e92.scope: Deactivated successfully.
Jan 26 09:42:55 compute-0 sudo[96559]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:55 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Jan 26 09:42:55 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Jan 26 09:42:56 compute-0 ceph-mon[74456]: pgmap v16: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:56 compute-0 ceph-mon[74456]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 26 09:42:56 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:56 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zprrum", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 26 09:42:56 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zprrum", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 26 09:42:56 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:56 compute-0 ceph-mon[74456]: Deploying daemon mds.cephfs.compute-2.zprrum on compute-2
Jan 26 09:42:56 compute-0 ceph-mon[74456]: 6.a scrub starts
Jan 26 09:42:56 compute-0 ceph-mon[74456]: 6.a scrub ok
Jan 26 09:42:56 compute-0 ceph-mon[74456]: 7.8 deep-scrub starts
Jan 26 09:42:56 compute-0 ceph-mon[74456]: 7.8 deep-scrub ok
Jan 26 09:42:56 compute-0 ansible-async_wrapper.py[95580]: Done in kid B.
Jan 26 09:42:56 compute-0 sudo[97049]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vujtdwsbrodqtjycldsjnpnppseafnbn ; /usr/bin/python3'
Jan 26 09:42:56 compute-0 sudo[97049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:56 compute-0 python3[97051]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v17: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:56 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Jan 26 09:42:56 compute-0 podman[97052]: 2026-01-26 09:42:56.774558051 +0000 UTC m=+0.050555740 container create 265a4586868fb2bc32a9e54359d30de9b34a7770fd3565b0143eb719aca73386 (image=quay.io/ceph/ceph:v19, name=stoic_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 26 09:42:56 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Jan 26 09:42:56 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 14 completed events
Jan 26 09:42:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:42:56 compute-0 systemd[1]: Started libpod-conmon-265a4586868fb2bc32a9e54359d30de9b34a7770fd3565b0143eb719aca73386.scope.
Jan 26 09:42:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:42:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:42:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:56 compute-0 podman[97052]: 2026-01-26 09:42:56.750685939 +0000 UTC m=+0.026683658 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 26 09:42:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60ce3bc709510add3f89a76ffb7e6e282363f71a3ddbab03d62c960a79ec6e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60ce3bc709510add3f89a76ffb7e6e282363f71a3ddbab03d62c960a79ec6e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zhqpiu", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 26 09:42:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zhqpiu", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 26 09:42:56 compute-0 podman[97052]: 2026-01-26 09:42:56.869064827 +0000 UTC m=+0.145062526 container init 265a4586868fb2bc32a9e54359d30de9b34a7770fd3565b0143eb719aca73386 (image=quay.io/ceph/ceph:v19, name=stoic_pascal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 09:42:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zhqpiu", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 26 09:42:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:42:56 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:56 compute-0 podman[97052]: 2026-01-26 09:42:56.881808834 +0000 UTC m=+0.157806523 container start 265a4586868fb2bc32a9e54359d30de9b34a7770fd3565b0143eb719aca73386 (image=quay.io/ceph/ceph:v19, name=stoic_pascal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:56 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.zhqpiu on compute-0
Jan 26 09:42:56 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.zhqpiu on compute-0
Jan 26 09:42:56 compute-0 podman[97052]: 2026-01-26 09:42:56.885145606 +0000 UTC m=+0.161143305 container attach 265a4586868fb2bc32a9e54359d30de9b34a7770fd3565b0143eb719aca73386 (image=quay.io/ceph/ceph:v19, name=stoic_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 09:42:56 compute-0 sudo[97071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:42:56 compute-0 sudo[97071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:56 compute-0 sudo[97071]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:57 compute-0 sudo[97096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:42:57 compute-0 sudo[97096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:42:57 compute-0 ceph-mon[74456]: from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 09:42:57 compute-0 ceph-mon[74456]: 6.5 scrub starts
Jan 26 09:42:57 compute-0 ceph-mon[74456]: 6.5 scrub ok
Jan 26 09:42:57 compute-0 ceph-mon[74456]: 7.4 deep-scrub starts
Jan 26 09:42:57 compute-0 ceph-mon[74456]: 7.4 deep-scrub ok
Jan 26 09:42:57 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:57 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:57 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:57 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:57 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zhqpiu", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 26 09:42:57 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zhqpiu", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 26 09:42:57 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e3 new map
Jan 26 09:42:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-01-26T09:42:57.034497+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-26T09:42:37.723319+0000
                                           modified        2026-01-26T09:42:37.723319+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.zprrum{-1:24220} state up:standby seq 1 addr [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] compat {c=[1],r=[1],i=[1fff]}]
Jan 26 09:42:57 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] up:boot
Jan 26 09:42:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] as mds.0
Jan 26 09:42:57 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.zprrum assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 26 09:42:57 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 26 09:42:57 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 26 09:42:57 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 26 09:42:57 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 26 09:42:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.zprrum"} v 0)
Jan 26 09:42:57 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zprrum"}]: dispatch
Jan 26 09:42:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e3 all = 0
Jan 26 09:42:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e4 new map
Jan 26 09:42:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-01-26T09:42:57.061062+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-26T09:42:37.723319+0000
                                           modified        2026-01-26T09:42:57.061055+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24220}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.zprrum{0:24220} state up:creating seq 1 addr [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 26 09:42:57 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:creating}
Jan 26 09:42:57 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.zprrum is now active in filesystem cephfs as rank 0
Jan 26 09:42:57 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.14562 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 09:42:57 compute-0 stoic_pascal[97067]: 
Jan 26 09:42:57 compute-0 stoic_pascal[97067]: [{"container_id": "186f11669743", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.08%", "created": "2026-01-26T09:39:05.944225Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-26T09:42:38.867922Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2026-01-26T09:39:05.825732Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@crash.compute-0", "version": "19.2.3"}, {"container_id": "92f0d6d8766a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.43%", "created": "2026-01-26T09:40:07.563224Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-26T09:42:38.583406Z", "memory_usage": 7821328, "ports": [], "service_name": "crash", "started": "2026-01-26T09:40:07.447358Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@crash.compute-1", "version": "19.2.3"}, {"container_id": "cca202907db5", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.23%", "created": "2026-01-26T09:41:15.385522Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-26T09:42:38.829876Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2026-01-26T09:41:15.283042Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@crash.compute-2", "version": "19.2.3"}, {"daemon_id": "cephfs.compute-2.zprrum", "daemon_name": "mds.cephfs.compute-2.zprrum", "daemon_type": "mds", "events": ["2026-01-26T09:42:56.846611Z daemon:mds.cephfs.compute-2.zprrum [INFO] \"Deployed mds.cephfs.compute-2.zprrum on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "0a039908c861", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "21.35%", "created": "2026-01-26T09:38:30.637286Z", "daemon_id": 
"compute-0.zllcia", "daemon_name": "mgr.compute-0.zllcia", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-26T09:42:38.867819Z", "memory_usage": 542113792, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-26T09:38:30.183499Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@mgr.compute-0.zllcia", "version": "19.2.3"}, {"container_id": "78ca290e02e4", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "38.92%", "created": "2026-01-26T09:41:12.808040Z", "daemon_id": "compute-1.xammti", "daemon_name": "mgr.compute-1.xammti", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-26T09:42:38.583829Z", "memory_usage": 504365056, "ports": [8765], "service_name": "mgr", "started": "2026-01-26T09:41:12.714802Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@mgr.compute-1.xammti", "version": "19.2.3"}, {"container_id": "ccc26a800171", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "28.76%", "created": "2026-01-26T09:41:07.281300Z", "daemon_id": "compute-2.oynaeu", "daemon_name": "mgr.compute-2.oynaeu", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-26T09:42:38.829774Z", "memory_usage": 503945625, "ports": [8765], "service_name": "mgr", "started": "2026-01-26T09:41:07.181895Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@mgr.compute-2.oynaeu", "version": "19.2.3"}, {"container_id": "3b123b7595d9", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.34%", "created": "2026-01-26T09:38:21.855052Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-26T09:42:38.867635Z", "memory_request": 2147483648, "memory_usage": 63187189, "ports": [], "service_name": "mon", "started": "2026-01-26T09:38:26.535991Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@mon.compute-0", "version": "19.2.3"}, {"container_id": "0913a1c63c0c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.13%", "created": "2026-01-26T09:41:05.539844Z", "daemon_id": "compute-1", "daemon_name": 
"mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-26T09:42:38.583729Z", "memory_request": 2147483648, "memory_usage": 48842670, "ports": [], "service_name": "mon", "started": "2026-01-26T09:41:05.438150Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@mon.compute-1", "version": "19.2.3"}, {"container_id": "a0b01604ce60", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.42%", "created": "2026-01-26T09:40:58.338055Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-26T09:42:38.829674Z", "memory_request": 2147483648, "memory_usage": 48653926, "ports": [], "service_name": "mon", "started": "2026-01-26T09:40:58.228819Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@mon.compute-2", "version": "19.2.3"}, {"container_id": "57a35f5609c0", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80", "quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e"], "container_image_id": "72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e", "container_image_name": "quay.io/prometheus/node-exporter:v1.7.0", "cpu_percentage": "0.13%", "created": "2026-01-26T09:42:25.260417Z", "daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-26T09:42:38.868118Z", "memory_usage": 4174381, "ports": [9100], "service_name": "node-exporter", "started": "2026-01-26T09:42:25.171569Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@node-exporter.compute-0", "version": "1.7.0"}, {"daemon_id": "compute-1", "daemon_name": "node-exporter.compute-1", "daemon_type": "node-exporter", "events": ["2026-01-26T09:42:45.647481Z daemon:node-exporter.compute-1 [INFO] \"Deployed node-exporter.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2", "daemon_name": "node-exporter.compute-2", "daemon_type": "node-exporter", "events": ["2026-01-26T09:42:48.528246Z daemon:node-exporter.compute-2 [INFO] \"Deployed node-exporter.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "cb8bebf3475b", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.67%", "created": "2026-01-26T09:40:19.912168Z", "daemon_id": "0", "daemon_name": "osd.0", 
"daemon_type": "osd", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-26T09:42:38.868020Z", "memory_request": 4294967296, "memory_usage": 75853987, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-26T09:40:19.769393Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@osd.0", "version": "19.2.3"}, {"container_id": "ba4e5e4834ef", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.58%", "created": "2026-01-26T09:40:19.023547Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-26T09:42:38.583607Z", "memory_request": 4294967296, "memory_usage": 71932313, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-26T09:40:18.926192Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@osd.1", "version": "19.2.3"}, {"container_id": "9a5c8fbab396", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.67%", "created": "2026-01-26T09:41:31.220956Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-26T09:42:38.829950Z", "memory_request": 4294967296, "memory_usage": 69195530, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-26T09:41:31.124634Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@osd.2", "version": "19.2.3"}, {"daemon_id": "rgw.compute-0.qkzyup", "daemon_name": "rgw.rgw.compute-0.qkzyup", "daemon_type": "rgw", "events": ["2026-01-26T09:42:54.972445Z daemon:rgw.rgw.compute-0.qkzyup [INFO] \"Deployed rgw.rgw.compute-0.qkzyup on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}, {"container_id": "4dd5c8bd1cc8", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.39%", "created": "2026-01-26T09:42:11.841636Z", "daemon_id": "rgw.compute-1.fbcidm", "daemon_name": "rgw.rgw.compute-1.fbcidm", "daemon_type": "rgw", "hostname": "compute-1", "ip": "192.168.122.101", "is_active": false, "last_refresh": "2026-01-26T09:42:38.583929Z", "memory_usage": 101669928, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-01-26T09:42:11.730710Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@rgw.rgw.compute-1.fbcidm", "version": "19.2.3"}, {"container_id": "e0f6b52b546a", "container_image_digests": 
["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.43%", "created": "2026-01-26T09:42:10.175020Z", "daemon_id": "rgw.compute-2.fgzdbm", "daemon_name": "rgw.rgw.compute-2.fgzdbm", "daemon_type": "rgw", "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "last_refresh": "2026-01-26T09:42:38.830030Z", "memory_usage": 102403932, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-01-26T09:42:10.069822Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@rgw.rgw.compute-2.fgzdbm", "version": "19.2.3"}]
Jan 26 09:42:57 compute-0 systemd[1]: libpod-265a4586868fb2bc32a9e54359d30de9b34a7770fd3565b0143eb719aca73386.scope: Deactivated successfully.
Jan 26 09:42:57 compute-0 podman[97052]: 2026-01-26 09:42:57.299345138 +0000 UTC m=+0.575342837 container died 265a4586868fb2bc32a9e54359d30de9b34a7770fd3565b0143eb719aca73386 (image=quay.io/ceph/ceph:v19, name=stoic_pascal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 09:42:57 compute-0 rsyslogd[1007]: message too long (14938) with configured size 8096, begin of message is: [{"container_id": "186f11669743", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 26 09:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c60ce3bc709510add3f89a76ffb7e6e282363f71a3ddbab03d62c960a79ec6e4-merged.mount: Deactivated successfully.
Jan 26 09:42:57 compute-0 podman[97052]: 2026-01-26 09:42:57.336042279 +0000 UTC m=+0.612039958 container remove 265a4586868fb2bc32a9e54359d30de9b34a7770fd3565b0143eb719aca73386 (image=quay.io/ceph/ceph:v19, name=stoic_pascal, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 09:42:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:42:57 compute-0 systemd[1]: libpod-conmon-265a4586868fb2bc32a9e54359d30de9b34a7770fd3565b0143eb719aca73386.scope: Deactivated successfully.
Jan 26 09:42:57 compute-0 sudo[97049]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:57 compute-0 podman[97197]: 2026-01-26 09:42:57.4050206 +0000 UTC m=+0.038491901 container create 7840d9bb0f4d033f7908426207da3d776ff718f430137505ef5681acd666079c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_jemison, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 26 09:42:57 compute-0 systemd[1]: Started libpod-conmon-7840d9bb0f4d033f7908426207da3d776ff718f430137505ef5681acd666079c.scope.
Jan 26 09:42:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:57 compute-0 podman[97197]: 2026-01-26 09:42:57.477230418 +0000 UTC m=+0.110701739 container init 7840d9bb0f4d033f7908426207da3d776ff718f430137505ef5681acd666079c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_jemison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:57 compute-0 podman[97197]: 2026-01-26 09:42:57.387577694 +0000 UTC m=+0.021049025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:57 compute-0 podman[97197]: 2026-01-26 09:42:57.484510957 +0000 UTC m=+0.117982258 container start 7840d9bb0f4d033f7908426207da3d776ff718f430137505ef5681acd666079c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_jemison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 26 09:42:57 compute-0 zealous_jemison[97213]: 167 167
Jan 26 09:42:57 compute-0 podman[97197]: 2026-01-26 09:42:57.487320634 +0000 UTC m=+0.120791955 container attach 7840d9bb0f4d033f7908426207da3d776ff718f430137505ef5681acd666079c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_jemison, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 09:42:57 compute-0 systemd[1]: libpod-7840d9bb0f4d033f7908426207da3d776ff718f430137505ef5681acd666079c.scope: Deactivated successfully.
Jan 26 09:42:57 compute-0 podman[97197]: 2026-01-26 09:42:57.488558697 +0000 UTC m=+0.122029998 container died 7840d9bb0f4d033f7908426207da3d776ff718f430137505ef5681acd666079c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-1271078f4bc013133f652459832e6e8451017e15c7ea2be1439916d56933f6ad-merged.mount: Deactivated successfully.
Jan 26 09:42:57 compute-0 podman[97197]: 2026-01-26 09:42:57.521561777 +0000 UTC m=+0.155033098 container remove 7840d9bb0f4d033f7908426207da3d776ff718f430137505ef5681acd666079c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_jemison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 26 09:42:57 compute-0 systemd[1]: libpod-conmon-7840d9bb0f4d033f7908426207da3d776ff718f430137505ef5681acd666079c.scope: Deactivated successfully.
Jan 26 09:42:57 compute-0 systemd[1]: Reloading.
Jan 26 09:42:57 compute-0 systemd-rc-local-generator[97254]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:42:57 compute-0 systemd-sysv-generator[97257]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:42:57 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 26 09:42:57 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 26 09:42:57 compute-0 systemd[1]: Reloading.
Jan 26 09:42:57 compute-0 systemd-rc-local-generator[97299]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:42:57 compute-0 systemd-sysv-generator[97303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:42:58 compute-0 ceph-mon[74456]: pgmap v17: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:42:58 compute-0 ceph-mon[74456]: Deploying daemon mds.cephfs.compute-0.zhqpiu on compute-0
Jan 26 09:42:58 compute-0 ceph-mon[74456]: mds.? [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] up:boot
Jan 26 09:42:58 compute-0 ceph-mon[74456]: daemon mds.cephfs.compute-2.zprrum assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 26 09:42:58 compute-0 ceph-mon[74456]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 26 09:42:58 compute-0 ceph-mon[74456]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 26 09:42:58 compute-0 ceph-mon[74456]: Cluster is now healthy
Jan 26 09:42:58 compute-0 ceph-mon[74456]: fsmap cephfs:0 1 up:standby
Jan 26 09:42:58 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zprrum"}]: dispatch
Jan 26 09:42:58 compute-0 ceph-mon[74456]: fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:creating}
Jan 26 09:42:58 compute-0 ceph-mon[74456]: daemon mds.cephfs.compute-2.zprrum is now active in filesystem cephfs as rank 0
Jan 26 09:42:58 compute-0 ceph-mon[74456]: from='client.14562 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 09:42:58 compute-0 ceph-mon[74456]: 7.9 scrub starts
Jan 26 09:42:58 compute-0 ceph-mon[74456]: 7.9 scrub ok
Jan 26 09:42:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e5 new map
Jan 26 09:42:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-01-26T09:42:58:074031+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-26T09:42:37.723319+0000
                                           modified        2026-01-26T09:42:58.074029+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24220}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24220 members: 24220
                                           [mds.cephfs.compute-2.zprrum{0:24220} state up:active seq 2 addr [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 26 09:42:58 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] up:active
Jan 26 09:42:58 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:active}
Jan 26 09:42:58 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.zhqpiu for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:42:58 compute-0 sudo[97388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfayfapndlquuvyjvyknxctlfbuopeps ; /usr/bin/python3'
Jan 26 09:42:58 compute-0 sudo[97388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:42:58 compute-0 podman[97371]: 2026-01-26 09:42:58.341998576 +0000 UTC m=+0.053293664 container create 11dd348e4aac36fd5e3cb263c5b329b7dc494b983700fcd6202151a1e06d5718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mds-cephfs-compute-0-zhqpiu, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8186aff3c073d52cf90e98f9e3d09fbfda13d71bce25a457dde802c2d8036178/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8186aff3c073d52cf90e98f9e3d09fbfda13d71bce25a457dde802c2d8036178/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8186aff3c073d52cf90e98f9e3d09fbfda13d71bce25a457dde802c2d8036178/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8186aff3c073d52cf90e98f9e3d09fbfda13d71bce25a457dde802c2d8036178/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.zhqpiu supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:58 compute-0 podman[97371]: 2026-01-26 09:42:58.311833083 +0000 UTC m=+0.023128261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:42:58 compute-0 podman[97371]: 2026-01-26 09:42:58.40997979 +0000 UTC m=+0.121274908 container init 11dd348e4aac36fd5e3cb263c5b329b7dc494b983700fcd6202151a1e06d5718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mds-cephfs-compute-0-zhqpiu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:42:58 compute-0 podman[97371]: 2026-01-26 09:42:58.417682389 +0000 UTC m=+0.128977477 container start 11dd348e4aac36fd5e3cb263c5b329b7dc494b983700fcd6202151a1e06d5718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mds-cephfs-compute-0-zhqpiu, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:42:58 compute-0 bash[97371]: 11dd348e4aac36fd5e3cb263c5b329b7dc494b983700fcd6202151a1e06d5718
Jan 26 09:42:58 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.zhqpiu for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:42:58 compute-0 ceph-mds[97403]: set uid:gid to 167:167 (ceph:ceph)
Jan 26 09:42:58 compute-0 ceph-mds[97403]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Jan 26 09:42:58 compute-0 ceph-mds[97403]: main not setting numa affinity
Jan 26 09:42:58 compute-0 ceph-mds[97403]: pidfile_write: ignore empty --pid-file
Jan 26 09:42:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mds-cephfs-compute-0-zhqpiu[97399]: starting mds.cephfs.compute-0.zhqpiu at 
Jan 26 09:42:58 compute-0 sudo[97096]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:58 compute-0 python3[97396]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:42:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:42:58 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Updating MDS map to version 5 from mon.0
Jan 26 09:42:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:42:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 26 09:42:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.rbkelk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 26 09:42:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.rbkelk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 26 09:42:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.rbkelk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 26 09:42:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:42:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:58 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.rbkelk on compute-1
Jan 26 09:42:58 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.rbkelk on compute-1
Jan 26 09:42:58 compute-0 podman[97420]: 2026-01-26 09:42:58.560061021 +0000 UTC m=+0.055431142 container create 5d21359f6d3c172d9e7624c3c6b459955e3c59e61558baac7ecd17c61767a829 (image=quay.io/ceph/ceph:v19, name=lucid_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 09:42:58 compute-0 systemd[1]: Started libpod-conmon-5d21359f6d3c172d9e7624c3c6b459955e3c59e61558baac7ecd17c61767a829.scope.
Jan 26 09:42:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:42:58 compute-0 podman[97420]: 2026-01-26 09:42:58.534427462 +0000 UTC m=+0.029797613 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e24f4f0539bb81afd36b0ca3795da1e6cbef36d91fc4113fbe13a619313485/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e24f4f0539bb81afd36b0ca3795da1e6cbef36d91fc4113fbe13a619313485/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:42:58 compute-0 podman[97420]: 2026-01-26 09:42:58.640989168 +0000 UTC m=+0.136359369 container init 5d21359f6d3c172d9e7624c3c6b459955e3c59e61558baac7ecd17c61767a829 (image=quay.io/ceph/ceph:v19, name=lucid_banach, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:42:58 compute-0 podman[97420]: 2026-01-26 09:42:58.647245188 +0000 UTC m=+0.142615299 container start 5d21359f6d3c172d9e7624c3c6b459955e3c59e61558baac7ecd17c61767a829 (image=quay.io/ceph/ceph:v19, name=lucid_banach, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:58 compute-0 podman[97420]: 2026-01-26 09:42:58.650391063 +0000 UTC m=+0.145761224 container attach 5d21359f6d3c172d9e7624c3c6b459955e3c59e61558baac7ecd17c61767a829 (image=quay.io/ceph/ceph:v19, name=lucid_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:42:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Jan 26 09:42:58 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 26 09:42:58 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 26 09:42:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 26 09:42:59 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/620644567' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 09:42:59 compute-0 lucid_banach[97438]: 
Jan 26 09:42:59 compute-0 lucid_banach[97438]: {"fsid":"1a70b85d-e3fd-5814-8a6a-37ea00fcae30","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":108,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":60,"num_osds":3,"num_up_osds":3,"osd_up_since":1769420501,"num_in_osds":3,"osd_in_since":1769420478,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":89305088,"bytes_avail":64322621440,"bytes_total":64411926528},"fsmap":{"epoch":5,"btime":"2026-01-26T09:42:58:074031+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.zprrum","status":"up:active","gid":24220}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":6,"modified":"2026-01-26T09:42:56.710037+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.zllcia":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.xammti":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.oynaeu":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14550":{"start_epoch":6,"start_stamp":"2026-01-26T09:42:55.265376+0000","gid":14550,"addr":"192.168.122.100:0/4070303399","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.qkzyup","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 
2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864308","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"88adcf12-6dc3-48b6-86bb-ed23fd934e78","zone_name":"default","zonegroup_id":"423841e2-30ae-45d1-92b7-7a24aa3d4488","zonegroup_name":"default"},"task_status":{}},"24169":{"start_epoch":5,"start_stamp":"2026-01-26T09:42:18.842562+0000","gid":24169,"addr":"192.168.122.101:0/992292627","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.fbcidm","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864304","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"88adcf12-6dc3-48b6-86bb-ed23fd934e78","zone_name":"default","zonegroup_id":"423841e2-30ae-45d1-92b7-7a24aa3d4488","zonegroup_name":"default"},"task_status":{}},"24172":{"start_epoch":5,"start_stamp":"2026-01-26T09:42:18.838857+0000","gid":24172,"addr":"192.168.122.102:0/1812478715","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.fgzdbm","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864308","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"88adcf12-6dc3-48b6-86bb-ed23fd934e78","zone_name":"default","zonegroup_id":"423841e2-30ae-45d1-92b7-7a24aa3d4488","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"f89e814a-e4c2-461e-94c0-9d4ae432b796":{"message":"Updating mds.cephfs deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 26 09:42:59 compute-0 systemd[1]: libpod-5d21359f6d3c172d9e7624c3c6b459955e3c59e61558baac7ecd17c61767a829.scope: Deactivated successfully.
Jan 26 09:42:59 compute-0 podman[97420]: 2026-01-26 09:42:59.093606478 +0000 UTC m=+0.588976629 container died 5d21359f6d3c172d9e7624c3c6b459955e3c59e61558baac7ecd17c61767a829 (image=quay.io/ceph/ceph:v19, name=lucid_banach, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:42:59 compute-0 ceph-mon[74456]: mds.? [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] up:active
Jan 26 09:42:59 compute-0 ceph-mon[74456]: fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:active}
Jan 26 09:42:59 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:59 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:59 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:42:59 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.rbkelk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 26 09:42:59 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.rbkelk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 26 09:42:59 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:42:59 compute-0 ceph-mon[74456]: 7.b scrub starts
Jan 26 09:42:59 compute-0 ceph-mon[74456]: 7.b scrub ok
Jan 26 09:42:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/620644567' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 09:42:59 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Updating MDS map to version 6 from mon.0
Jan 26 09:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-61e24f4f0539bb81afd36b0ca3795da1e6cbef36d91fc4113fbe13a619313485-merged.mount: Deactivated successfully.
Jan 26 09:42:59 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Monitors have assigned me to become a standby
Jan 26 09:42:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e6 new map
Jan 26 09:42:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2026-01-26T09:42:59:090306+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-26T09:42:37.723319+0000
                                           modified        2026-01-26T09:42:58.074029+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24220}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24220 members: 24220
                                           [mds.cephfs.compute-2.zprrum{0:24220} state up:active seq 2 addr [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zhqpiu{-1:14568} state up:standby seq 1 addr [v2:192.168.122.100:6806/4011782606,v1:192.168.122.100:6807/4011782606] compat {c=[1],r=[1],i=[1fff]}]
Jan 26 09:42:59 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/4011782606,v1:192.168.122.100:6807/4011782606] up:boot
Jan 26 09:42:59 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:active} 1 up:standby
Jan 26 09:42:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.zhqpiu"} v 0)
Jan 26 09:42:59 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.zhqpiu"}]: dispatch
Jan 26 09:42:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e6 all = 0
Jan 26 09:42:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e7 new map
Jan 26 09:42:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2026-01-26T09:42:59.119781+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-26T09:42:37.723319+0000
                                           modified        2026-01-26T09:42:58.074029+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24220}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24220 members: 24220
                                           [mds.cephfs.compute-2.zprrum{0:24220} state up:active seq 2 addr [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zhqpiu{-1:14568} state up:standby seq 1 addr [v2:192.168.122.100:6806/4011782606,v1:192.168.122.100:6807/4011782606] compat {c=[1],r=[1],i=[1fff]}]
Jan 26 09:42:59 compute-0 podman[97420]: 2026-01-26 09:42:59.130571265 +0000 UTC m=+0.625941416 container remove 5d21359f6d3c172d9e7624c3c6b459955e3c59e61558baac7ecd17c61767a829 (image=quay.io/ceph/ceph:v19, name=lucid_banach, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 09:42:59 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:active} 1 up:standby
Jan 26 09:42:59 compute-0 systemd[1]: libpod-conmon-5d21359f6d3c172d9e7624c3c6b459955e3c59e61558baac7ecd17c61767a829.scope: Deactivated successfully.
Jan 26 09:42:59 compute-0 sudo[97388]: pam_unix(sudo:session): session closed for user root
Jan 26 09:42:59 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 26 09:42:59 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 26 09:43:00 compute-0 sudo[97497]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aprvrwcmzeydwnlbnxgxprgnflgplffz ; /usr/bin/python3'
Jan 26 09:43:00 compute-0 sudo[97497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: Deploying daemon mds.cephfs.compute-1.rbkelk on compute-1
Jan 26 09:43:00 compute-0 ceph-mon[74456]: pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mds.? [v2:192.168.122.100:6806/4011782606,v1:192.168.122.100:6807/4011782606] up:boot
Jan 26 09:43:00 compute-0 ceph-mon[74456]: fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:active} 1 up:standby
Jan 26 09:43:00 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.zhqpiu"}]: dispatch
Jan 26 09:43:00 compute-0 ceph-mon[74456]: fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:active} 1 up:standby
Jan 26 09:43:00 compute-0 ceph-mon[74456]: 7.1e scrub starts
Jan 26 09:43:00 compute-0 ceph-mon[74456]: 7.1e scrub ok
Jan 26 09:43:00 compute-0 python3[97499]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev f89e814a-e4c2-461e-94c0-9d4ae432b796 (Updating mds.cephfs deployment (+3 -> 3))
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event f89e814a-e4c2-461e-94c0-9d4ae432b796 (Updating mds.cephfs deployment (+3 -> 3)) in 5 seconds
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 26 09:43:00 compute-0 podman[97500]: 2026-01-26 09:43:00.218697492 +0000 UTC m=+0.057663663 container create 8e1c9be6c6747cf72e2178d366c0f996011c3bd6b8e3720beeb0871db84831fd (image=quay.io/ceph/ceph:v19, name=musing_hopper, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 8bcc3328-08fe-43c9-8732-fb9cbf37c6e5 (Updating nfs.cephfs deployment (+3 -> 3))
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.thyhvc
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.thyhvc
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.thyhvc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.thyhvc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 26 09:43:00 compute-0 systemd[1]: Started libpod-conmon-8e1c9be6c6747cf72e2178d366c0f996011c3bd6b8e3720beeb0871db84831fd.scope.
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.thyhvc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:43:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7013b2909444bf436cadaff1f260f5098b9ac2785516b2daffd0510de2e07812/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7013b2909444bf436cadaff1f260f5098b9ac2785516b2daffd0510de2e07812/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:00 compute-0 podman[97500]: 2026-01-26 09:43:00.2923242 +0000 UTC m=+0.131290391 container init 8e1c9be6c6747cf72e2178d366c0f996011c3bd6b8e3720beeb0871db84831fd (image=quay.io/ceph/ceph:v19, name=musing_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:43:00 compute-0 podman[97500]: 2026-01-26 09:43:00.202906262 +0000 UTC m=+0.041872463 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:43:00 compute-0 podman[97500]: 2026-01-26 09:43:00.30222683 +0000 UTC m=+0.141193001 container start 8e1c9be6c6747cf72e2178d366c0f996011c3bd6b8e3720beeb0871db84831fd (image=quay.io/ceph/ceph:v19, name=musing_hopper, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 09:43:00 compute-0 podman[97500]: 2026-01-26 09:43:00.305484758 +0000 UTC m=+0.144450949 container attach 8e1c9be6c6747cf72e2178d366c0f996011c3bd6b8e3720beeb0871db84831fd (image=quay.io/ceph/ceph:v19, name=musing_hopper, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.thyhvc-rgw
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.thyhvc-rgw
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.thyhvc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.thyhvc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.thyhvc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.thyhvc's ganesha conf is defaulting to empty
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.thyhvc's ganesha conf is defaulting to empty
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.thyhvc on compute-1
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.thyhvc on compute-1
Jan 26 09:43:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 26 09:43:00 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/121448194' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 09:43:00 compute-0 musing_hopper[97516]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.zllcia/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.xammti/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.oynaeu/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.qkzyup","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.fbcidm","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.fgzdbm","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 26 09:43:00 compute-0 systemd[1]: libpod-8e1c9be6c6747cf72e2178d366c0f996011c3bd6b8e3720beeb0871db84831fd.scope: Deactivated successfully.
Jan 26 09:43:00 compute-0 podman[97500]: 2026-01-26 09:43:00.705129475 +0000 UTC m=+0.544095666 container died 8e1c9be6c6747cf72e2178d366c0f996011c3bd6b8e3720beeb0871db84831fd (image=quay.io/ceph/ceph:v19, name=musing_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Jan 26 09:43:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Jan 26 09:43:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7013b2909444bf436cadaff1f260f5098b9ac2785516b2daffd0510de2e07812-merged.mount: Deactivated successfully.
Jan 26 09:43:00 compute-0 podman[97500]: 2026-01-26 09:43:00.748405555 +0000 UTC m=+0.587371726 container remove 8e1c9be6c6747cf72e2178d366c0f996011c3bd6b8e3720beeb0871db84831fd (image=quay.io/ceph/ceph:v19, name=musing_hopper, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 09:43:00 compute-0 systemd[1]: libpod-conmon-8e1c9be6c6747cf72e2178d366c0f996011c3bd6b8e3720beeb0871db84831fd.scope: Deactivated successfully.
Jan 26 09:43:00 compute-0 sudo[97497]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:01 compute-0 ceph-mon[74456]: Creating key for client.nfs.cephfs.0.0.compute-1.thyhvc
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.thyhvc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.thyhvc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 26 09:43:01 compute-0 ceph-mon[74456]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 26 09:43:01 compute-0 ceph-mon[74456]: Rados config object exists: conf-nfs.cephfs
Jan 26 09:43:01 compute-0 ceph-mon[74456]: Creating key for client.nfs.cephfs.0.0.compute-1.thyhvc-rgw
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.thyhvc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.thyhvc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:43:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/121448194' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 09:43:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e8 new map
Jan 26 09:43:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2026-01-26T09:43:01.199992+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-26T09:42:37.723319+0000
                                           modified        2026-01-26T09:43:01.101127+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24220}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24220 members: 24220
                                           [mds.cephfs.compute-2.zprrum{0:24220} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zhqpiu{-1:14568} state up:standby seq 1 addr [v2:192.168.122.100:6806/4011782606,v1:192.168.122.100:6807/4011782606] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.rbkelk{-1:24194} state up:standby seq 1 addr [v2:192.168.122.101:6804/4143393925,v1:192.168.122.101:6805/4143393925] compat {c=[1],r=[1],i=[1fff]}]
Jan 26 09:43:01 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/4143393925,v1:192.168.122.101:6805/4143393925] up:boot
Jan 26 09:43:01 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] up:active
Jan 26 09:43:01 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:active} 2 up:standby
Jan 26 09:43:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.rbkelk"} v 0)
Jan 26 09:43:01 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.rbkelk"}]: dispatch
Jan 26 09:43:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e8 all = 0
Jan 26 09:43:01 compute-0 sudo[97610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whgfoeoouvvfqptlsconfrckyswzaesh ; /usr/bin/python3'
Jan 26 09:43:01 compute-0 sudo[97610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:43:01 compute-0 python3[97612]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:43:01 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 15 completed events
Jan 26 09:43:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:43:01 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:01 compute-0 podman[97613]: 2026-01-26 09:43:01.884849228 +0000 UTC m=+0.047267189 container create 18433386daf0a5c425f7207a8ea9360a0f82ca2a8dd6db8d241bd3c0b5ccbada (image=quay.io/ceph/ceph:v19, name=vigilant_cohen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 09:43:01 compute-0 systemd[1]: Started libpod-conmon-18433386daf0a5c425f7207a8ea9360a0f82ca2a8dd6db8d241bd3c0b5ccbada.scope.
Jan 26 09:43:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f24ce8779086a8f4fa75edc2b123b6fe785fb727600b090aa579a2c9546bc15/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f24ce8779086a8f4fa75edc2b123b6fe785fb727600b090aa579a2c9546bc15/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:01 compute-0 podman[97613]: 2026-01-26 09:43:01.866790976 +0000 UTC m=+0.029208957 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:43:01 compute-0 podman[97613]: 2026-01-26 09:43:01.983300673 +0000 UTC m=+0.145718634 container init 18433386daf0a5c425f7207a8ea9360a0f82ca2a8dd6db8d241bd3c0b5ccbada (image=quay.io/ceph/ceph:v19, name=vigilant_cohen, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:43:01 compute-0 podman[97613]: 2026-01-26 09:43:01.988795803 +0000 UTC m=+0.151213774 container start 18433386daf0a5c425f7207a8ea9360a0f82ca2a8dd6db8d241bd3c0b5ccbada (image=quay.io/ceph/ceph:v19, name=vigilant_cohen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Jan 26 09:43:01 compute-0 podman[97613]: 2026-01-26 09:43:01.992330459 +0000 UTC m=+0.154748420 container attach 18433386daf0a5c425f7207a8ea9360a0f82ca2a8dd6db8d241bd3c0b5ccbada (image=quay.io/ceph/ceph:v19, name=vigilant_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 09:43:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:43:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:43:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:43:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:02 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.najyrz
Jan 26 09:43:02 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.najyrz
Jan 26 09:43:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.najyrz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 26 09:43:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.najyrz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 26 09:43:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.najyrz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 26 09:43:02 compute-0 ceph-mgr[74755]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 26 09:43:02 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 26 09:43:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 26 09:43:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 26 09:43:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 26 09:43:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:43:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:43:02 compute-0 ceph-mon[74456]: Bind address in nfs.cephfs.0.0.compute-1.thyhvc's ganesha conf is defaulting to empty
Jan 26 09:43:02 compute-0 ceph-mon[74456]: Deploying daemon nfs.cephfs.0.0.compute-1.thyhvc on compute-1
Jan 26 09:43:02 compute-0 ceph-mon[74456]: pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Jan 26 09:43:02 compute-0 ceph-mon[74456]: mds.? [v2:192.168.122.101:6804/4143393925,v1:192.168.122.101:6805/4143393925] up:boot
Jan 26 09:43:02 compute-0 ceph-mon[74456]: mds.? [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] up:active
Jan 26 09:43:02 compute-0 ceph-mon[74456]: fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:active} 2 up:standby
Jan 26 09:43:02 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.rbkelk"}]: dispatch
Jan 26 09:43:02 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:02 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:02 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:02 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:02 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.najyrz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 26 09:43:02 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.najyrz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 26 09:43:02 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 26 09:43:02 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 26 09:43:02 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:43:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:43:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 26 09:43:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/59313208' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 26 09:43:02 compute-0 vigilant_cohen[97629]: mimic
Jan 26 09:43:02 compute-0 systemd[1]: libpod-18433386daf0a5c425f7207a8ea9360a0f82ca2a8dd6db8d241bd3c0b5ccbada.scope: Deactivated successfully.
Jan 26 09:43:02 compute-0 podman[97613]: 2026-01-26 09:43:02.368988338 +0000 UTC m=+0.531406319 container died 18433386daf0a5c425f7207a8ea9360a0f82ca2a8dd6db8d241bd3c0b5ccbada (image=quay.io/ceph/ceph:v19, name=vigilant_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 26 09:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f24ce8779086a8f4fa75edc2b123b6fe785fb727600b090aa579a2c9546bc15-merged.mount: Deactivated successfully.
Jan 26 09:43:02 compute-0 podman[97613]: 2026-01-26 09:43:02.402646096 +0000 UTC m=+0.565064047 container remove 18433386daf0a5c425f7207a8ea9360a0f82ca2a8dd6db8d241bd3c0b5ccbada (image=quay.io/ceph/ceph:v19, name=vigilant_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 09:43:02 compute-0 systemd[1]: libpod-conmon-18433386daf0a5c425f7207a8ea9360a0f82ca2a8dd6db8d241bd3c0b5ccbada.scope: Deactivated successfully.
Jan 26 09:43:02 compute-0 sudo[97610]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Jan 26 09:43:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e9 new map
Jan 26 09:43:03 compute-0 ceph-mon[74456]: Creating key for client.nfs.cephfs.1.0.compute-2.najyrz
Jan 26 09:43:03 compute-0 ceph-mon[74456]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 26 09:43:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/59313208' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 26 09:43:03 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Updating MDS map to version 9 from mon.0
Jan 26 09:43:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           btime 2026-01-26T09:43:03.225984+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-26T09:42:37.723319+0000
                                           modified        2026-01-26T09:43:01.101127+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24220}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24220 members: 24220
                                           [mds.cephfs.compute-2.zprrum{0:24220} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zhqpiu{-1:14568} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/4011782606,v1:192.168.122.100:6807/4011782606] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.rbkelk{-1:24194} state up:standby seq 1 addr [v2:192.168.122.101:6804/4143393925,v1:192.168.122.101:6805/4143393925] compat {c=[1],r=[1],i=[1fff]}]
Jan 26 09:43:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/4011782606,v1:192.168.122.100:6807/4011782606] up:standby
Jan 26 09:43:03 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:active} 2 up:standby
Jan 26 09:43:03 compute-0 sudo[97705]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbkuvvqhkyhppxlptszfkkpbqgwvdnfz ; /usr/bin/python3'
Jan 26 09:43:03 compute-0 sudo[97705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:43:03 compute-0 python3[97707]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:43:03 compute-0 podman[97708]: 2026-01-26 09:43:03.617216401 +0000 UTC m=+0.043841877 container create 3b2332dd509a4261cc8237c826799482a570a18101b6229fc9f9193dc0313094 (image=quay.io/ceph/ceph:v19, name=zealous_cerf, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:43:03 compute-0 systemd[1]: Started libpod-conmon-3b2332dd509a4261cc8237c826799482a570a18101b6229fc9f9193dc0313094.scope.
Jan 26 09:43:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c215212c54856930177c078965cbdc64c67b90c2f171a42815616b9f789dfbb8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c215212c54856930177c078965cbdc64c67b90c2f171a42815616b9f789dfbb8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:03 compute-0 podman[97708]: 2026-01-26 09:43:03.598969584 +0000 UTC m=+0.025595070 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:43:03 compute-0 podman[97708]: 2026-01-26 09:43:03.697763327 +0000 UTC m=+0.124388813 container init 3b2332dd509a4261cc8237c826799482a570a18101b6229fc9f9193dc0313094 (image=quay.io/ceph/ceph:v19, name=zealous_cerf, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 26 09:43:03 compute-0 podman[97708]: 2026-01-26 09:43:03.703940876 +0000 UTC m=+0.130566332 container start 3b2332dd509a4261cc8237c826799482a570a18101b6229fc9f9193dc0313094 (image=quay.io/ceph/ceph:v19, name=zealous_cerf, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 09:43:03 compute-0 podman[97708]: 2026-01-26 09:43:03.708922571 +0000 UTC m=+0.135548067 container attach 3b2332dd509a4261cc8237c826799482a570a18101b6229fc9f9193dc0313094 (image=quay.io/ceph/ceph:v19, name=zealous_cerf, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 09:43:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 26 09:43:04 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1572231238' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 26 09:43:04 compute-0 zealous_cerf[97723]: 
Jan 26 09:43:04 compute-0 zealous_cerf[97723]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Jan 26 09:43:04 compute-0 systemd[1]: libpod-3b2332dd509a4261cc8237c826799482a570a18101b6229fc9f9193dc0313094.scope: Deactivated successfully.
Jan 26 09:43:04 compute-0 podman[97708]: 2026-01-26 09:43:04.156510297 +0000 UTC m=+0.583135753 container died 3b2332dd509a4261cc8237c826799482a570a18101b6229fc9f9193dc0313094 (image=quay.io/ceph/ceph:v19, name=zealous_cerf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c215212c54856930177c078965cbdc64c67b90c2f171a42815616b9f789dfbb8-merged.mount: Deactivated successfully.
Jan 26 09:43:04 compute-0 podman[97708]: 2026-01-26 09:43:04.190765453 +0000 UTC m=+0.617390919 container remove 3b2332dd509a4261cc8237c826799482a570a18101b6229fc9f9193dc0313094 (image=quay.io/ceph/ceph:v19, name=zealous_cerf, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 09:43:04 compute-0 systemd[1]: libpod-conmon-3b2332dd509a4261cc8237c826799482a570a18101b6229fc9f9193dc0313094.scope: Deactivated successfully.
Jan 26 09:43:04 compute-0 sudo[97705]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e10 new map
Jan 26 09:43:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e10 print_map
                                           e10
                                           btime 2026-01-26T09:43:04.246912+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-26T09:42:37.723319+0000
                                           modified        2026-01-26T09:43:01.101127+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24220}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24220 members: 24220
                                           [mds.cephfs.compute-2.zprrum{0:24220} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1987962990,v1:192.168.122.102:6805/1987962990] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.zhqpiu{-1:14568} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/4011782606,v1:192.168.122.100:6807/4011782606] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.rbkelk{-1:24194} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/4143393925,v1:192.168.122.101:6805/4143393925] compat {c=[1],r=[1],i=[1fff]}]
Jan 26 09:43:04 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/4143393925,v1:192.168.122.101:6805/4143393925] up:standby
Jan 26 09:43:04 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:active} 2 up:standby
Jan 26 09:43:04 compute-0 ceph-mon[74456]: pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.2 KiB/s wr, 85 op/s
Jan 26 09:43:04 compute-0 ceph-mon[74456]: mds.? [v2:192.168.122.100:6806/4011782606,v1:192.168.122.100:6807/4011782606] up:standby
Jan 26 09:43:04 compute-0 ceph-mon[74456]: fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:active} 2 up:standby
Jan 26 09:43:04 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1572231238' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 26 09:43:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 1.9 KiB/s wr, 87 op/s
Jan 26 09:43:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 26 09:43:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 26 09:43:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 26 09:43:05 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 26 09:43:05 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 26 09:43:05 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.najyrz-rgw
Jan 26 09:43:05 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.najyrz-rgw
Jan 26 09:43:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.najyrz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 26 09:43:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.najyrz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:43:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.najyrz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 09:43:05 compute-0 ceph-mgr[74755]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.najyrz's ganesha conf is defaulting to empty
Jan 26 09:43:05 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.najyrz's ganesha conf is defaulting to empty
Jan 26 09:43:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:43:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:43:05 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.najyrz on compute-2
Jan 26 09:43:05 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.najyrz on compute-2
Jan 26 09:43:05 compute-0 ceph-mon[74456]: mds.? [v2:192.168.122.101:6804/4143393925,v1:192.168.122.101:6805/4143393925] up:standby
Jan 26 09:43:05 compute-0 ceph-mon[74456]: fsmap cephfs:1 {0=cephfs.compute-2.zprrum=up:active} 2 up:standby
Jan 26 09:43:05 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 26 09:43:05 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 26 09:43:05 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.najyrz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:43:05 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.najyrz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 09:43:05 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:43:06 compute-0 ceph-mon[74456]: pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 1.9 KiB/s wr, 87 op/s
Jan 26 09:43:06 compute-0 ceph-mon[74456]: Rados config object exists: conf-nfs.cephfs
Jan 26 09:43:06 compute-0 ceph-mon[74456]: Creating key for client.nfs.cephfs.1.0.compute-2.najyrz-rgw
Jan 26 09:43:06 compute-0 ceph-mon[74456]: Bind address in nfs.cephfs.1.0.compute-2.najyrz's ganesha conf is defaulting to empty
Jan 26 09:43:06 compute-0 ceph-mon[74456]: Deploying daemon nfs.cephfs.1.0.compute-2.najyrz on compute-2
Jan 26 09:43:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 1.9 KiB/s wr, 87 op/s
Jan 26 09:43:06 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:43:06 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:43:06 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:43:06 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:43:06 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:43:06 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:43:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:43:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:43:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:43:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:07 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.zfynkw
Jan 26 09:43:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.zfynkw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 26 09:43:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.zfynkw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 26 09:43:07 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.zfynkw
Jan 26 09:43:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.zfynkw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 26 09:43:07 compute-0 ceph-mgr[74755]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 26 09:43:07 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 26 09:43:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 26 09:43:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 26 09:43:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 26 09:43:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:43:07 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:43:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:43:08 compute-0 ceph-mon[74456]: pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 1.9 KiB/s wr, 87 op/s
Jan 26 09:43:08 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:08 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:08 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:08 compute-0 ceph-mon[74456]: Creating key for client.nfs.cephfs.2.0.compute-0.zfynkw
Jan 26 09:43:08 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.zfynkw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 26 09:43:08 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.zfynkw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 26 09:43:08 compute-0 ceph-mon[74456]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 26 09:43:08 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 26 09:43:08 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 26 09:43:08 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:43:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.7 KiB/s wr, 89 op/s
Jan 26 09:43:10 compute-0 ceph-mon[74456]: pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.7 KiB/s wr, 89 op/s
Jan 26 09:43:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 26 09:43:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 26 09:43:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 26 09:43:10 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 26 09:43:10 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 26 09:43:10 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.zfynkw-rgw
Jan 26 09:43:10 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.zfynkw-rgw
Jan 26 09:43:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.zfynkw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 26 09:43:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.zfynkw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:43:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.zfynkw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 09:43:10 compute-0 ceph-mgr[74755]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.zfynkw's ganesha conf is defaulting to empty
Jan 26 09:43:10 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.zfynkw's ganesha conf is defaulting to empty
Jan 26 09:43:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:43:10 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:43:10 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.zfynkw on compute-0
Jan 26 09:43:10 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.zfynkw on compute-0
Jan 26 09:43:10 compute-0 sudo[97816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:43:10 compute-0 sudo[97816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:43:10 compute-0 sudo[97816]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:10 compute-0 sudo[97841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:43:10 compute-0 sudo[97841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:43:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Jan 26 09:43:11 compute-0 podman[97907]: 2026-01-26 09:43:11.009091076 +0000 UTC m=+0.076824870 container create a0caf9c3a8c9d2ebf33f4160c64069b45ecac029c9e105f63d7a7807b02f8233 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:43:11 compute-0 systemd[1]: Started libpod-conmon-a0caf9c3a8c9d2ebf33f4160c64069b45ecac029c9e105f63d7a7807b02f8233.scope.
Jan 26 09:43:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:11 compute-0 podman[97907]: 2026-01-26 09:43:10.97729733 +0000 UTC m=+0.045031144 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:43:11 compute-0 podman[97907]: 2026-01-26 09:43:11.075692504 +0000 UTC m=+0.143426318 container init a0caf9c3a8c9d2ebf33f4160c64069b45ecac029c9e105f63d7a7807b02f8233 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bohr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 26 09:43:11 compute-0 podman[97907]: 2026-01-26 09:43:11.080625844 +0000 UTC m=+0.148359628 container start a0caf9c3a8c9d2ebf33f4160c64069b45ecac029c9e105f63d7a7807b02f8233 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bohr, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:43:11 compute-0 hardcore_bohr[97924]: 167 167
Jan 26 09:43:11 compute-0 podman[97907]: 2026-01-26 09:43:11.083103345 +0000 UTC m=+0.150837169 container attach a0caf9c3a8c9d2ebf33f4160c64069b45ecac029c9e105f63d7a7807b02f8233 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:43:11 compute-0 systemd[1]: libpod-a0caf9c3a8c9d2ebf33f4160c64069b45ecac029c9e105f63d7a7807b02f8233.scope: Deactivated successfully.
Jan 26 09:43:11 compute-0 podman[97907]: 2026-01-26 09:43:11.084453553 +0000 UTC m=+0.152187367 container died a0caf9c3a8c9d2ebf33f4160c64069b45ecac029c9e105f63d7a7807b02f8233 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bohr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 26 09:43:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-41c4d23f7bbd003d125e28d0e980d1e6fccfa472b21d19b92f8a7cf7cf39a6d8-merged.mount: Deactivated successfully.
Jan 26 09:43:11 compute-0 podman[97907]: 2026-01-26 09:43:11.11767892 +0000 UTC m=+0.185412714 container remove a0caf9c3a8c9d2ebf33f4160c64069b45ecac029c9e105f63d7a7807b02f8233 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_bohr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:43:11 compute-0 systemd[1]: libpod-conmon-a0caf9c3a8c9d2ebf33f4160c64069b45ecac029c9e105f63d7a7807b02f8233.scope: Deactivated successfully.
Jan 26 09:43:11 compute-0 systemd[1]: Reloading.
Jan 26 09:43:11 compute-0 systemd-rc-local-generator[97964]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:11 compute-0 systemd-sysv-generator[97969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:11 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 26 09:43:11 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 26 09:43:11 compute-0 ceph-mon[74456]: Rados config object exists: conf-nfs.cephfs
Jan 26 09:43:11 compute-0 ceph-mon[74456]: Creating key for client.nfs.cephfs.2.0.compute-0.zfynkw-rgw
Jan 26 09:43:11 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.zfynkw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:43:11 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.zfynkw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 09:43:11 compute-0 ceph-mon[74456]: Bind address in nfs.cephfs.2.0.compute-0.zfynkw's ganesha conf is defaulting to empty
Jan 26 09:43:11 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:43:11 compute-0 ceph-mon[74456]: Deploying daemon nfs.cephfs.2.0.compute-0.zfynkw on compute-0
Jan 26 09:43:11 compute-0 systemd[1]: Reloading.
Jan 26 09:43:11 compute-0 systemd-rc-local-generator[98005]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:11 compute-0 systemd-sysv-generator[98008]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:11 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:43:11 compute-0 podman[98062]: 2026-01-26 09:43:11.873530224 +0000 UTC m=+0.038726413 container create d3395b53724857015134a8bdb584007eb1b94a5b002c559505dba80a9d92ea83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 09:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8fcb101861e368803c33a29bf93002c5f91e6b91c443454877fb31bb48be69/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8fcb101861e368803c33a29bf93002c5f91e6b91c443454877fb31bb48be69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8fcb101861e368803c33a29bf93002c5f91e6b91c443454877fb31bb48be69/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8fcb101861e368803c33a29bf93002c5f91e6b91c443454877fb31bb48be69/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:11 compute-0 podman[98062]: 2026-01-26 09:43:11.927273806 +0000 UTC m=+0.092470025 container init d3395b53724857015134a8bdb584007eb1b94a5b002c559505dba80a9d92ea83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:43:11 compute-0 podman[98062]: 2026-01-26 09:43:11.931681802 +0000 UTC m=+0.096877991 container start d3395b53724857015134a8bdb584007eb1b94a5b002c559505dba80a9d92ea83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:43:11 compute-0 bash[98062]: d3395b53724857015134a8bdb584007eb1b94a5b002c559505dba80a9d92ea83
Jan 26 09:43:11 compute-0 podman[98062]: 2026-01-26 09:43:11.854966126 +0000 UTC m=+0.020162355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:43:11 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:43:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:11 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 09:43:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:11 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 09:43:11 compute-0 sudo[97841]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:43:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:43:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:12 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 09:43:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:12 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 09:43:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:12 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 09:43:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:12 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 09:43:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:12 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 09:43:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:43:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:12 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:43:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:12 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev 8bcc3328-08fe-43c9-8732-fb9cbf37c6e5 (Updating nfs.cephfs deployment (+3 -> 3))
Jan 26 09:43:12 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 8bcc3328-08fe-43c9-8732-fb9cbf37c6e5 (Updating nfs.cephfs deployment (+3 -> 3)) in 12 seconds
Jan 26 09:43:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:43:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:12 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev e819fdee-21b8-43fb-86fd-87a2e9251b37 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Jan 26 09:43:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Jan 26 09:43:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:12 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.nsxfyf on compute-1
Jan 26 09:43:12 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.nsxfyf on compute-1
Jan 26 09:43:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:43:12 compute-0 ceph-mon[74456]: pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Jan 26 09:43:12 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:12 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:12 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:12 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:12 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:43:13 compute-0 ceph-mon[74456]: Deploying daemon haproxy.nfs.cephfs.compute-1.nsxfyf on compute-1
Jan 26 09:43:13 compute-0 ceph-mon[74456]: pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 09:43:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:13 : epoch 6977372f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 09:43:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Jan 26 09:43:16 compute-0 ceph-mon[74456]: pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Jan 26 09:43:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:43:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:43:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:43:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 26 09:43:16 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 16 completed events
Jan 26 09:43:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:43:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:16 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.eucyze on compute-0
Jan 26 09:43:16 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.eucyze on compute-0
Jan 26 09:43:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:16 compute-0 sudo[98131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:43:16 compute-0 sudo[98131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:43:16 compute-0 sudo[98131]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:16 compute-0 sudo[98156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:43:16 compute-0 sudo[98156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:43:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:43:17 compute-0 ceph-mon[74456]: pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:43:17 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:17 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:17 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:17 compute-0 ceph-mon[74456]: Deploying daemon haproxy.nfs.cephfs.compute-0.eucyze on compute-0
Jan 26 09:43:17 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:18 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb910000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Jan 26 09:43:19 compute-0 podman[98222]: 2026-01-26 09:43:19.704145369 +0000 UTC m=+2.304853979 container create 37c579b92922b1551c64ed9b86127cfde8bf5d213765c19386609f91e2fda144 (image=quay.io/ceph/haproxy:2.3, name=silly_clarke)
Jan 26 09:43:19 compute-0 podman[98222]: 2026-01-26 09:43:19.683406188 +0000 UTC m=+2.284114828 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 26 09:43:19 compute-0 systemd[1]: Started libpod-conmon-37c579b92922b1551c64ed9b86127cfde8bf5d213765c19386609f91e2fda144.scope.
Jan 26 09:43:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:19 compute-0 podman[98222]: 2026-01-26 09:43:19.769510861 +0000 UTC m=+2.370219501 container init 37c579b92922b1551c64ed9b86127cfde8bf5d213765c19386609f91e2fda144 (image=quay.io/ceph/haproxy:2.3, name=silly_clarke)
Jan 26 09:43:19 compute-0 podman[98222]: 2026-01-26 09:43:19.777119808 +0000 UTC m=+2.377828418 container start 37c579b92922b1551c64ed9b86127cfde8bf5d213765c19386609f91e2fda144 (image=quay.io/ceph/haproxy:2.3, name=silly_clarke)
Jan 26 09:43:19 compute-0 podman[98222]: 2026-01-26 09:43:19.780455253 +0000 UTC m=+2.381163883 container attach 37c579b92922b1551c64ed9b86127cfde8bf5d213765c19386609f91e2fda144 (image=quay.io/ceph/haproxy:2.3, name=silly_clarke)
Jan 26 09:43:19 compute-0 silly_clarke[98340]: 0 0
Jan 26 09:43:19 compute-0 systemd[1]: libpod-37c579b92922b1551c64ed9b86127cfde8bf5d213765c19386609f91e2fda144.scope: Deactivated successfully.
Jan 26 09:43:19 compute-0 podman[98222]: 2026-01-26 09:43:19.783033026 +0000 UTC m=+2.383741636 container died 37c579b92922b1551c64ed9b86127cfde8bf5d213765c19386609f91e2fda144 (image=quay.io/ceph/haproxy:2.3, name=silly_clarke)
Jan 26 09:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-89ebfb2fc2cfec29192d29ae86f0619bc8ee35c99e2744c5708b9f3b96dc5a4c-merged.mount: Deactivated successfully.
Jan 26 09:43:19 compute-0 podman[98222]: 2026-01-26 09:43:19.821652347 +0000 UTC m=+2.422360957 container remove 37c579b92922b1551c64ed9b86127cfde8bf5d213765c19386609f91e2fda144 (image=quay.io/ceph/haproxy:2.3, name=silly_clarke)
Jan 26 09:43:19 compute-0 systemd[1]: libpod-conmon-37c579b92922b1551c64ed9b86127cfde8bf5d213765c19386609f91e2fda144.scope: Deactivated successfully.
Jan 26 09:43:19 compute-0 ceph-mon[74456]: pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Jan 26 09:43:19 compute-0 systemd[1]: Reloading.
Jan 26 09:43:19 compute-0 systemd-rc-local-generator[98388]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:19 compute-0 systemd-sysv-generator[98392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:20 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc0016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:20 compute-0 systemd[1]: Reloading.
Jan 26 09:43:20 compute-0 systemd-rc-local-generator[98428]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:20 compute-0 systemd-sysv-generator[98432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:20 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.eucyze for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:43:20 compute-0 podman[98486]: 2026-01-26 09:43:20.604069149 +0000 UTC m=+0.041017159 container create 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 09:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77458c09bb507714ae4ea17b5d05fd39ea8a7bcac7171028414fb711d8f0770c/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:20 compute-0 podman[98486]: 2026-01-26 09:43:20.65710232 +0000 UTC m=+0.094050350 container init 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 09:43:20 compute-0 podman[98486]: 2026-01-26 09:43:20.661549987 +0000 UTC m=+0.098497997 container start 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 09:43:20 compute-0 bash[98486]: 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653
Jan 26 09:43:20 compute-0 podman[98486]: 2026-01-26 09:43:20.585127279 +0000 UTC m=+0.022075309 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 26 09:43:20 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.eucyze for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:43:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [NOTICE] 025/094320 (2) : New worker #1 (4) forked
Jan 26 09:43:20 compute-0 sudo[98156]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:43:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Jan 26 09:43:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:43:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 26 09:43:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:20 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.rbycaf on compute-2
Jan 26 09:43:20 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.rbycaf on compute-2
Jan 26 09:43:21 compute-0 ceph-mon[74456]: pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Jan 26 09:43:21 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:21 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:21 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:21 compute-0 ceph-mon[74456]: Deploying daemon haproxy.nfs.cephfs.compute-2.rbycaf on compute-2
Jan 26 09:43:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:22 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:22 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904001230 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:43:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Jan 26 09:43:23 compute-0 ceph-mon[74456]: pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Jan 26 09:43:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:24 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:24 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc0016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Jan 26 09:43:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:43:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:43:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 26 09:43:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Jan 26 09:43:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:24 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 26 09:43:24 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 26 09:43:24 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:43:24 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:43:24 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:43:24 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:43:24 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.wvnxoh on compute-1
Jan 26 09:43:24 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.wvnxoh on compute-1
Jan 26 09:43:25 compute-0 ceph-mon[74456]: pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Jan 26 09:43:25 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:25 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:25 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:25 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:25 compute-0 ceph-mon[74456]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 26 09:43:25 compute-0 ceph-mon[74456]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:43:25 compute-0 ceph-mon[74456]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:43:25 compute-0 ceph-mon[74456]: Deploying daemon keepalived.nfs.cephfs.compute-1.wvnxoh on compute-1
Jan 26 09:43:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:26 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:26 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:26 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 938 B/s wr, 4 op/s
Jan 26 09:43:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:43:27 compute-0 ceph-mon[74456]: pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 938 B/s wr, 4 op/s
Jan 26 09:43:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:28 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc0016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:28 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:28 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 938 B/s wr, 4 op/s
Jan 26 09:43:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:43:29 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:43:29 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 26 09:43:29 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:29 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:43:29 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:43:29 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 26 09:43:29 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 26 09:43:29 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:43:29 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:43:29 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.orrhyj on compute-0
Jan 26 09:43:29 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.orrhyj on compute-0
Jan 26 09:43:29 compute-0 sudo[98516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:43:29 compute-0 sudo[98516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:43:29 compute-0 sudo[98516]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:29 compute-0 sudo[98541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:43:29 compute-0 sudo[98541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:43:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:30 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:30 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc0016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:30 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:30 compute-0 ceph-mon[74456]: pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 938 B/s wr, 4 op/s
Jan 26 09:43:30 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:30 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:30 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:43:31 compute-0 sshd-session[98642]: Invalid user admin from 157.245.76.178 port 38206
Jan 26 09:43:31 compute-0 sshd-session[98642]: Connection closed by invalid user admin 157.245.76.178 port 38206 [preauth]
Jan 26 09:43:31 compute-0 ceph-mon[74456]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:43:31 compute-0 ceph-mon[74456]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 26 09:43:31 compute-0 ceph-mon[74456]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:43:31 compute-0 ceph-mon[74456]: Deploying daemon keepalived.nfs.cephfs.compute-0.orrhyj on compute-0
Jan 26 09:43:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:32 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:32 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:32 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc0016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:43:32 compute-0 ceph-mon[74456]: pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:43:32 compute-0 podman[98611]: 2026-01-26 09:43:32.658859955 +0000 UTC m=+2.619887765 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 26 09:43:32 compute-0 podman[98611]: 2026-01-26 09:43:32.675794308 +0000 UTC m=+2.636822098 container create 34dd29e3822c036631187d042477b02101087009e69ea1cd1765b3bc7c2826d1 (image=quay.io/ceph/keepalived:2.2.4, name=xenodochial_kepler, io.buildah.version=1.28.2, version=2.2.4, io.openshift.tags=Ceph keepalived, vcs-type=git, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vendor=Red Hat, Inc.)
Jan 26 09:43:32 compute-0 systemd[1]: Started libpod-conmon-34dd29e3822c036631187d042477b02101087009e69ea1cd1765b3bc7c2826d1.scope.
Jan 26 09:43:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:43:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:32 compute-0 podman[98611]: 2026-01-26 09:43:32.753343577 +0000 UTC m=+2.714371417 container init 34dd29e3822c036631187d042477b02101087009e69ea1cd1765b3bc7c2826d1 (image=quay.io/ceph/keepalived:2.2.4, name=xenodochial_kepler, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, release=1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, version=2.2.4, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 09:43:32 compute-0 podman[98611]: 2026-01-26 09:43:32.761326725 +0000 UTC m=+2.722354525 container start 34dd29e3822c036631187d042477b02101087009e69ea1cd1765b3bc7c2826d1 (image=quay.io/ceph/keepalived:2.2.4, name=xenodochial_kepler, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2023-02-22T09:23:20, version=2.2.4, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 26 09:43:32 compute-0 podman[98611]: 2026-01-26 09:43:32.764276969 +0000 UTC m=+2.725304779 container attach 34dd29e3822c036631187d042477b02101087009e69ea1cd1765b3bc7c2826d1 (image=quay.io/ceph/keepalived:2.2.4, name=xenodochial_kepler, distribution-scope=public, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, release=1793, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vendor=Red Hat, Inc., version=2.2.4, vcs-type=git)
Jan 26 09:43:32 compute-0 xenodochial_kepler[98709]: 0 0
Jan 26 09:43:32 compute-0 systemd[1]: libpod-34dd29e3822c036631187d042477b02101087009e69ea1cd1765b3bc7c2826d1.scope: Deactivated successfully.
Jan 26 09:43:32 compute-0 podman[98611]: 2026-01-26 09:43:32.767176271 +0000 UTC m=+2.728204071 container died 34dd29e3822c036631187d042477b02101087009e69ea1cd1765b3bc7c2826d1 (image=quay.io/ceph/keepalived:2.2.4, name=xenodochial_kepler, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9)
Jan 26 09:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-521e3aa44b69e3847b59b7251793fdeee984268749f6b4dfa236c82c5efbd535-merged.mount: Deactivated successfully.
Jan 26 09:43:32 compute-0 podman[98611]: 2026-01-26 09:43:32.810229118 +0000 UTC m=+2.771256908 container remove 34dd29e3822c036631187d042477b02101087009e69ea1cd1765b3bc7c2826d1 (image=quay.io/ceph/keepalived:2.2.4, name=xenodochial_kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, name=keepalived, vcs-type=git, io.buildah.version=1.28.2, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, architecture=x86_64, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793)
Jan 26 09:43:32 compute-0 systemd[1]: libpod-conmon-34dd29e3822c036631187d042477b02101087009e69ea1cd1765b3bc7c2826d1.scope: Deactivated successfully.
Jan 26 09:43:33 compute-0 systemd[1]: Reloading.
Jan 26 09:43:33 compute-0 systemd-rc-local-generator[98756]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:33 compute-0 systemd-sysv-generator[98761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:33 compute-0 ceph-mon[74456]: pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:43:33 compute-0 systemd[1]: Reloading.
Jan 26 09:43:33 compute-0 systemd-rc-local-generator[98796]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:33 compute-0 systemd-sysv-generator[98799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:33 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.orrhyj for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:43:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:34 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:34 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9040031e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:34 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:34 compute-0 podman[98855]: 2026-01-26 09:43:34.133458329 +0000 UTC m=+0.039998951 container create 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-type=git, version=2.2.4, io.openshift.expose-services=, release=1793, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, distribution-scope=public, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, architecture=x86_64)
Jan 26 09:43:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c4d516e87780be15f64da2081f5b1dfadb0ad3127cafd45db0f92c86ebfab35/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:34 compute-0 podman[98855]: 2026-01-26 09:43:34.18968101 +0000 UTC m=+0.096221632 container init 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, vcs-type=git, version=2.2.4, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.openshift.expose-services=, com.redhat.component=keepalived-container, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2)
Jan 26 09:43:34 compute-0 podman[98855]: 2026-01-26 09:43:34.19492676 +0000 UTC m=+0.101467352 container start 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, name=keepalived, build-date=2023-02-22T09:23:20, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived)
Jan 26 09:43:34 compute-0 bash[98855]: 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396
Jan 26 09:43:34 compute-0 podman[98855]: 2026-01-26 09:43:34.113937132 +0000 UTC m=+0.020477754 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 26 09:43:34 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.orrhyj for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:43:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj[98870]: Mon Jan 26 09:43:34 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 26 09:43:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj[98870]: Mon Jan 26 09:43:34 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 26 09:43:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj[98870]: Mon Jan 26 09:43:34 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 26 09:43:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj[98870]: Mon Jan 26 09:43:34 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 26 09:43:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj[98870]: Mon Jan 26 09:43:34 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 26 09:43:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj[98870]: Mon Jan 26 09:43:34 2026: Starting VRRP child process, pid=4
Jan 26 09:43:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj[98870]: Mon Jan 26 09:43:34 2026: Startup complete
Jan 26 09:43:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj[98870]: Mon Jan 26 09:43:34 2026: (VI_0) Entering BACKUP STATE (init)
Jan 26 09:43:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj[98870]: Mon Jan 26 09:43:34 2026: VRRP_Script(check_backend) succeeded
Jan 26 09:43:34 compute-0 sudo[98541]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:43:34 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:43:34 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 26 09:43:34 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:34 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:43:34 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:43:34 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:43:34 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:43:34 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 26 09:43:34 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 26 09:43:34 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.ovafut on compute-2
Jan 26 09:43:34 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.ovafut on compute-2
Jan 26 09:43:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:43:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:36 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc0016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:36 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:36 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9040031e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:36 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:36 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:36 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:36 compute-0 ceph-mon[74456]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:43:36 compute-0 ceph-mon[74456]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:43:36 compute-0 ceph-mon[74456]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 26 09:43:36 compute-0 ceph-mon[74456]: Deploying daemon keepalived.nfs.cephfs.compute-2.ovafut on compute-2
Jan 26 09:43:36 compute-0 ceph-mon[74456]: pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:43:36
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', '.rgw.root', '.mgr', 'default.rgw.log', 'images', '.nfs']
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:43:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 26 09:43:36 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
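The per-pool lines above expose the autoscaler's arithmetic: pg target = (fraction of raw capacity used) × bias × (target PGs per OSD × OSD count). With 3 OSDs and the default of roughly 100 target PGs per OSD the multiplier is 300, which reproduces the logged targets; the fractional results are then rounded to a power of two and clamped to each pool's floor, which appears to be why '.mgr' stays at 1, the CephFS metadata pool reports 16, and the rgw pools are about to be raised from 1 to 32. A check of the multiplication:

```python
# Reproducing the pg_autoscaler targets from the log lines above.
# Assumption: 3 OSDs x ~100 target PGs per OSD gives the x300 factor.
PG_PER_OSD = 100
NUM_OSDS = 3

pools = {
    # name: (fraction of raw space used, bias), copied from the log
    ".mgr":               (7.185749983720779e-06, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    ".rgw.root":          (3.8154424692322717e-07, 1.0),
    "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    ".nfs":               (6.359070782053786e-08, 1.0),
}

for name, (usage, bias) in pools.items():
    target = usage * bias * PG_PER_OSD * NUM_OSDS
    # Matches the logged "pg target" values up to float rounding.
    print(f"{name}: pg target {target}")
```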
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:43:36 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:43:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 26 09:43:37 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:43:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
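Each pg_num change shows up twice in the audit channel: once at "dispatch", when the leader mon accepts the command, and once at "finished", when the resulting osdmap epoch (e60 → e61 here) commits. The same JSON command the mgr sends can be issued from the rados Python binding; a sketch, assuming an admin keyring and the usual conf path:

```python
import json
import rados

# Connect as client.admin (conf/keyring locations are assumptions).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

# The exact command JSON seen in the audit log above.
cmd = {"prefix": "osd pool set", "pool": ".rgw.root",
       "var": "pg_num", "val": "32"}

# mon_command sends it to the monitor quorum and returns once the
# command completes -- the "dispatch" ... "finished" pair in the log.
ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
print(ret, outs)
cluster.shutdown()
```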
Jan 26 09:43:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 26 09:43:37 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 26 09:43:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 26 09:43:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:43:37 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev e8ca8f60-bc7c-4c7e-83ab-4c44d46115cc (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 26 09:43:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
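_set_new_cache_sizes shows how the monitor divides its cache budget between incremental osdmaps, full osdmaps, and the rocksdb block cache; the three buckets are whole-MiB slices of the ~973 MiB total:

```python
# Checking the monitor cache split logged above (values in bytes).
cache_size = 1020054731
inc_alloc  = 348127232   # incremental osdmap cache
full_alloc = 348127232   # full osdmap cache
kv_alloc   = 322961408   # rocksdb (kv) block cache

MiB = 1 << 20
for name, v in [("inc", inc_alloc), ("full", full_alloc), ("kv", kv_alloc)]:
    print(f"{name}_alloc = {v / MiB:.0f} MiB")   # 332 + 332 + 308 MiB

# The buckets sum to 972 MiB of the 972.8 MiB budget; the remainder
# is rounding slack.
print(f"total {(inc_alloc + full_alloc + kv_alloc) / MiB:.0f} MiB "
      f"of {cache_size / MiB:.1f} MiB")
```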
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:37.369888) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420617370003, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7323, "num_deletes": 253, "total_data_size": 13199021, "memory_usage": 13613552, "flush_reason": "Manual Compaction"}
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420617444849, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 11778189, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 155, "largest_seqno": 7473, "table_properties": {"data_size": 11751931, "index_size": 16735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 81046, "raw_average_key_size": 24, "raw_value_size": 11687276, "raw_average_value_size": 3463, "num_data_blocks": 739, "num_entries": 3374, "num_filter_entries": 3374, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420307, "oldest_key_time": 1769420307, "file_creation_time": 1769420617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 74984 microseconds, and 26687 cpu microseconds.
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:37.444914) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 11778189 bytes OK
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:37.444940) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:37.447270) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:37.447357) EVENT_LOG_v1 {"time_micros": 1769420617447345, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:37.447385) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13166275, prev total WAL file size 13166275, number of live WAL files 2.
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:37.450568) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(58KB) 8(1944B)]
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420617450660, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 11840076, "oldest_snapshot_seqno": -1}
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3199 keys, 11822023 bytes, temperature: kUnknown
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420617523536, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 11822023, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11796095, "index_size": 16858, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 80033, "raw_average_key_size": 25, "raw_value_size": 11732952, "raw_average_value_size": 3667, "num_data_blocks": 745, "num_entries": 3199, "num_filter_entries": 3199, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769420617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:37.523949) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 11822023 bytes
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:37.525433) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.1 rd, 161.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.3, 0.0 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3488, records dropped: 289 output_compression: NoCompression
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:37.525480) EVENT_LOG_v1 {"time_micros": 1769420617525446, "job": 4, "event": "compaction_finished", "compaction_time_micros": 73034, "compaction_time_cpu_micros": 24759, "output_level": 6, "num_output_files": 1, "total_output_size": 11822023, "num_input_records": 3488, "num_output_records": 3199, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420617528756, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420617528836, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420617528882, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 26 09:43:37 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:37.450478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
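The JOB 3/JOB 4 sequence above is a routine mon-store maintenance cycle: the memtable is flushed to a level-0 SST (#19), a manual compaction merges the three L0 files into a single L6 file (#20), and the inputs plus the old WAL are deleted. The throughput and amplification figures in the JOB 4 summary can be re-derived from the raw event numbers:

```python
# Numbers from the JOB 4 compaction_started/compaction_finished events.
input_bytes  = 11_840_076   # "input_data_size" (3 L0 files)
output_bytes = 11_822_023   # "total_output_size" (1 L6 file)
micros       = 73_034       # "compaction_time_micros"

# RocksDB reports MB/sec as bytes per microsecond (10^6 bytes/s).
print(f"read  {input_bytes / micros:.1f} MB/sec")    # -> 162.1 rd
print(f"write {output_bytes / micros:.1f} MB/sec")   # -> 161.9 wr

# write-amplify = bytes written / bytes ingested; read-write-amplify
# also counts the bytes read back in during the merge.
print(f"write-amplify      {output_bytes / input_bytes:.1f}")                   # 1.0
print(f"read-write-amplify {(input_bytes + output_bytes) / input_bytes:.1f}")   # 2.0
```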
Jan 26 09:43:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj[98870]: Mon Jan 26 09:43:37 2026: (VI_0) Entering MASTER STATE
Jan 26 09:43:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:38 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:38 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc0033b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:38 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
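One plausible reading of these svc_vc_recv events: the ingress haproxy in front of ganesha speaks the PROXY protocol, and connections that close before a complete header arrives (health probes, for instance) are marked dead by the RPC layer. For reference, a well-formed PROXY v1 preamble looks like this (addresses and ports below are hypothetical):

```python
# Hedged illustration of the PROXY protocol v1 header that ganesha's
# RPC layer appears to be reading above; endpoints are made up.
def proxy_v1_header(src, dst, sport, dport):
    return f"PROXY TCP4 {src} {dst} {sport} {dport}\r\n".encode()

print(proxy_v1_header("192.168.122.50", "192.168.122.2", 40000, 2049))
# A peer that connects and sends nothing (or closes early) never
# completes this header, producing the "will set dead" events logged.
```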
Jan 26 09:43:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 26 09:43:38 compute-0 ceph-mon[74456]: pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:43:38 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:43:38 compute-0 ceph-mon[74456]: osdmap e61: 3 total, 3 up, 3 in
Jan 26 09:43:38 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:43:38 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:43:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 26 09:43:38 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 26 09:43:38 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev cf93da99-9293-4033-a2c2-d35a716570a2 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 26 09:43:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 26 09:43:38 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:43:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Jan 26 09:43:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 26 09:43:38 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 26 09:43:38 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 26 09:43:39 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:43:39 compute-0 ceph-mon[74456]: osdmap e62: 3 total, 3 up, 3 in
Jan 26 09:43:39 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:43:39 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:39 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:43:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:43:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:43:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 26 09:43:39 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 26 09:43:39 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev a2f4f755-acc3-4a99-9c67-a522066f69c7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 26 09:43:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 26 09:43:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:43:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:43:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:43:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 26 09:43:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:39 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev e819fdee-21b8-43fb-86fd-87a2e9251b37 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Jan 26 09:43:39 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event e819fdee-21b8-43fb-86fd-87a2e9251b37 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 28 seconds
Jan 26 09:43:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 26 09:43:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:39 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev afa46e16-ff31-4351-836f-d3014d049d5a (Updating alertmanager deployment (+1 -> 1))
Jan 26 09:43:39 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Jan 26 09:43:39 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Jan 26 09:43:39 compute-0 sudo[98881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:43:39 compute-0 sudo[98881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:43:39 compute-0 sudo[98881]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:39 compute-0 sudo[98906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:43:39 compute-0 sudo[98906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
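The two sudo entries show cephadm's remote-exec pattern for deploying alertmanager: first probe for a python3 interpreter, then run the copied, content-addressed cephadm file under it with _orch deploy. A local sketch of the same two steps, with the paths and arguments taken from the log (it only prints the command rather than running it):

```python
import shlex
import subprocess

# Step 1: probe for python3, as in the first sudo line above.
probe = subprocess.run(["sudo", "which", "python3"],
                       capture_output=True, text=True, check=True)
python3 = probe.stdout.strip()

# Step 2: the copied cephadm binary invocation (path/args from the log).
cephadm_bin = ("/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
cmd = ["sudo", python3, cephadm_bin,
       "--image", "quay.io/prometheus/alertmanager:v0.25.0",
       "--timeout", "895", "_orch", "deploy",
       "--fsid", "1a70b85d-e3fd-5814-8a6a-37ea00fcae30"]
print(shlex.join(cmd))  # what the mgr effectively executes on the host
```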
Jan 26 09:43:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:40 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9040031e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:40 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:40 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc0033b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v42: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:43:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 26 09:43:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 26 09:43:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 63 pg[8.0( v 47'9 (0'0,47'9] local-lis/les=46/47 n=6 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=63 pruub=14.868728638s) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 47'8 mlcod 47'8 active pruub 213.934066772s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 63 pg[9.0( v 60'1159 (0'0,60'1159] local-lis/les=48/49 n=178 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=63 pruub=8.972600937s) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 60'1158 mlcod 60'1158 active pruub 208.038024902s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 63 pg[8.0( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=63 pruub=14.868728638s) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 47'8 mlcod 0'0 unknown pruub 213.934066772s@ mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x55c5bd5fe000) operator()   moving buffer(0x55c5be0e72e8 space 0x55c5be186aa0 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x55c5bd5fe000) operator()   moving buffer(0x55c5be1260c8 space 0x55c5be1864f0 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x55c5bd5fe000) operator()   moving buffer(0x55c5be12aa28 space 0x55c5be186d10 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x55c5bd5fe000) operator()   moving buffer(0x55c5be0e63e8 space 0x55c5bdfa6900 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 63 pg[9.0( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=63 pruub=8.972600937s) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 60'1158 mlcod 0'0 unknown pruub 208.038024902s@ mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be127f68 space 0x55c5be186f80 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be185248 space 0x55c5be186420 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be1268e8 space 0x55c5be186eb0 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be184d48 space 0x55c5bdf10830 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be142168 space 0x55c5bdfdd7a0 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be126ca8 space 0x55c5bde95530 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be1503e8 space 0x55c5be187940 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be107388 space 0x55c5bdfddc80 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be127d88 space 0x55c5bcda2b70 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be143928 space 0x55c5bdf11c80 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be13e708 space 0x55c5bdfdd870 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be1714c8 space 0x55c5bdeb0f80 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be154d48 space 0x55c5bdfddae0 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be1554c8 space 0x55c5be0af6d0 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be126de8 space 0x55c5be1876d0 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be1857e8 space 0x55c5bdee04f0 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be1276a8 space 0x55c5bdd4e9d0 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5bdd53248 space 0x55c5be0af870 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be106528 space 0x55c5be187390 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be126fc8 space 0x55c5bdedf530 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be127928 space 0x55c5be187460 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be126528 space 0x55c5be1871f0 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be127608 space 0x55c5bdfdda10 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be142708 space 0x55c5bdfb6de0 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be143068 space 0x55c5bde79940 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5bde9fce8 space 0x55c5be0af7a0 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be154708 space 0x55c5bdfdd940 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be13ed48 space 0x55c5be186c40 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be106d48 space 0x55c5bdfddbb0 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be126988 space 0x55c5be187a10 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55c5bd372900) operator()   moving buffer(0x55c5be107a68 space 0x55c5bdfe3530 0x0~1000 clean)
Jan 26 09:43:41 compute-0 ceph-mon[74456]: pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Jan 26 09:43:41 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:43:41 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:43:41 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:43:41 compute-0 ceph-mon[74456]: osdmap e63: 3 total, 3 up, 3 in
Jan 26 09:43:41 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:43:41 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:41 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:41 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:41 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:43:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.13( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.12( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.12( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.11( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.10( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.13( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.11( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.10( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.4( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=1 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.5( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.b( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.a( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.6( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.7( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.a( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.17( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.16( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.6( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=1 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.7( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.8( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.9( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.b( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.16( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.17( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.4( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.5( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=1 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.8( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.9( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.15( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.14( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.f( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.e( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.d( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.c( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.e( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.f( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.c( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.d( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.1( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.1( v 47'9 (0'0,47'9] local-lis/les=46/47 n=1 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.2( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=1 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.3( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.3( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=1 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.2( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.1d( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.1c( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.1c( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.1f( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.1d( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.1e( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.1e( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.19( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.1f( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.18( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.18( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.19( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.1a( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.1b( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.1a( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.1b( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[8.14( v 47'9 lc 0'0 (0'0,47'9] local-lis/les=46/47 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:41 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 64 pg[9.15( v 60'1159 lc 0'0 (0'0,60'1159] local-lis/les=48/49 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
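The burst of start_peering_interval / "transitioning to Primary" lines is the fallout of the pg_num bumps: pools 8 and 9 each split their single seed PG into 32, so children 8.1–8.1f and 9.1–9.1f appear and peer (note ec=63/46 and ec=63/48: split epoch 63 alongside the original creation epochs). For the power-of-two case, child placement-group IDs extend the seed in steps of the old pg_num; a sketch:

```python
# Sketch of child PG ids for a power-of-two pg_num increase, matching
# the 8.1..8.1f and 9.1..9.1f splits peering above.
def split_children(pool, seed, old_pg_num, new_pg_num):
    assert new_pg_num % old_pg_num == 0
    return [f"{pool}.{ps:x}"
            for ps in range(seed + old_pg_num, new_pg_num, old_pg_num)]

print(split_children(8, 0, 1, 32))   # ['8.1', '8.2', ..., '8.1f']
print(split_children(9, 0, 1, 32))
```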
Jan 26 09:43:41 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 6038409b-9b55-41c1-a036-18e47e43f69f (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 26 09:43:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Jan 26 09:43:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:43:41 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 17 completed events
Jan 26 09:43:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:43:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:41 compute-0 ceph-mgr[74755]: [progress WARNING root] Starting Global Recovery Event, 62 pgs not in active+clean state
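The 62 figure follows directly from the splits: the first two bumped pools each gained 31 child PGs, which enter the pgmap as "unknown" until peering completes (hence "260 pgs: 62 unknown, 198 active+clean" in pgmap v42 above):

```python
# Where "62 pgs not in active+clean state" comes from.
children_per_pool = 32 - 1   # each pool split from 1 PG to 32
pools_peering = 2            # pools 8 and 9 in this window
print(children_per_pool * pools_peering)        # 62 unknown PGs
print(198 + children_per_pool * pools_peering)  # 260 pgs in pgmap v42
```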
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.883977) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420621884006, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 372, "num_deletes": 251, "total_data_size": 249895, "memory_usage": 258328, "flush_reason": "Manual Compaction"}
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420621887095, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 248024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7474, "largest_seqno": 7845, "table_properties": {"data_size": 245642, "index_size": 482, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5757, "raw_average_key_size": 18, "raw_value_size": 240815, "raw_average_value_size": 759, "num_data_blocks": 21, "num_entries": 317, "num_filter_entries": 317, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420617, "oldest_key_time": 1769420617, "file_creation_time": 1769420621, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 3156 microseconds, and 1057 cpu microseconds.
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.887131) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 248024 bytes OK
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.887147) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.888539) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.888552) EVENT_LOG_v1 {"time_micros": 1769420621888548, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.888563) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 247400, prev total WAL file size 247400, number of live WAL files 2.
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.888817) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(242KB)], [20(11MB)]
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420621888844, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 12070047, "oldest_snapshot_seqno": -1}
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 2997 keys, 10943197 bytes, temperature: kUnknown
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420621957297, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 10943197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10919201, "index_size": 15375, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7557, "raw_key_size": 76910, "raw_average_key_size": 25, "raw_value_size": 10860106, "raw_average_value_size": 3623, "num_data_blocks": 673, "num_entries": 2997, "num_filter_entries": 2997, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769420621, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.957509) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 10943197 bytes
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.958678) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.2 rd, 159.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.3 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(92.8) write-amplify(44.1) OK, records in: 3516, records dropped: 519 output_compression: NoCompression
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.958699) EVENT_LOG_v1 {"time_micros": 1769420621958690, "job": 6, "event": "compaction_finished", "compaction_time_micros": 68519, "compaction_time_cpu_micros": 19065, "output_level": 6, "num_output_files": 1, "total_output_size": 10943197, "num_input_records": 3516, "num_output_records": 2997, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420621958834, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420621960705, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.888774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.960741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.960745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.960747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.960749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:43:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:43:41.960750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:43:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:42 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:42 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:42 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:42 compute-0 podman[98974]: 2026-01-26 09:43:42.26654676 +0000 UTC m=+2.163163482 volume create 29f7b3b60119f662b9c200608fa98fb8b2bf58f570ae63ae6ea167af65ee6913
Jan 26 09:43:42 compute-0 podman[98974]: 2026-01-26 09:43:42.275820595 +0000 UTC m=+2.172437317 container create 386edf658b819c3a1f765463c3844a588ecc40f7c6d8eae67ae02494cb311839 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_swanson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:42 compute-0 systemd[1]: Started libpod-conmon-386edf658b819c3a1f765463c3844a588ecc40f7c6d8eae67ae02494cb311839.scope.
Jan 26 09:43:42 compute-0 podman[98974]: 2026-01-26 09:43:42.250469643 +0000 UTC m=+2.147086405 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 26 09:43:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e888ef1438dc5fe523d9a5252ae4b978e681cf5fdd34a31f4f01c33546f2e8ac/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:42 compute-0 podman[98974]: 2026-01-26 09:43:42.356158674 +0000 UTC m=+2.252775406 container init 386edf658b819c3a1f765463c3844a588ecc40f7c6d8eae67ae02494cb311839 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_swanson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:42 compute-0 podman[98974]: 2026-01-26 09:43:42.362453273 +0000 UTC m=+2.259069985 container start 386edf658b819c3a1f765463c3844a588ecc40f7c6d8eae67ae02494cb311839 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_swanson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:43:42 compute-0 podman[98974]: 2026-01-26 09:43:42.36549024 +0000 UTC m=+2.262107002 container attach 386edf658b819c3a1f765463c3844a588ecc40f7c6d8eae67ae02494cb311839 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_swanson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:42 compute-0 nice_swanson[99112]: 65534 65534
Jan 26 09:43:42 compute-0 systemd[1]: libpod-386edf658b819c3a1f765463c3844a588ecc40f7c6d8eae67ae02494cb311839.scope: Deactivated successfully.
Jan 26 09:43:42 compute-0 podman[98974]: 2026-01-26 09:43:42.366400646 +0000 UTC m=+2.263017358 container died 386edf658b819c3a1f765463c3844a588ecc40f7c6d8eae67ae02494cb311839 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_swanson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e888ef1438dc5fe523d9a5252ae4b978e681cf5fdd34a31f4f01c33546f2e8ac-merged.mount: Deactivated successfully.
Jan 26 09:43:42 compute-0 podman[98974]: 2026-01-26 09:43:42.406773866 +0000 UTC m=+2.303390578 container remove 386edf658b819c3a1f765463c3844a588ecc40f7c6d8eae67ae02494cb311839 (image=quay.io/prometheus/alertmanager:v0.25.0, name=nice_swanson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:42 compute-0 podman[98974]: 2026-01-26 09:43:42.409509374 +0000 UTC m=+2.306126096 volume remove 29f7b3b60119f662b9c200608fa98fb8b2bf58f570ae63ae6ea167af65ee6913
Jan 26 09:43:42 compute-0 systemd[1]: libpod-conmon-386edf658b819c3a1f765463c3844a588ecc40f7c6d8eae67ae02494cb311839.scope: Deactivated successfully.
Jan 26 09:43:42 compute-0 podman[99130]: 2026-01-26 09:43:42.4630732 +0000 UTC m=+0.032888338 volume create d1735ff86c250f966839903e3bef43d095ff390e697a2f18ffa77e255c68819b
Jan 26 09:43:42 compute-0 podman[99130]: 2026-01-26 09:43:42.470955334 +0000 UTC m=+0.040770472 container create a2ef3a50d1338fe86072e1a3d0b20505d8be808bfd27bf53ebf81f17af415b35 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elated_proskuriakova, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:42 compute-0 systemd[1]: Started libpod-conmon-a2ef3a50d1338fe86072e1a3d0b20505d8be808bfd27bf53ebf81f17af415b35.scope.
Jan 26 09:43:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170fb26c86e196b4263e0eb4639e454684a91f2ac832c3349ff1d21ee62cf4e8/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:42 compute-0 podman[99130]: 2026-01-26 09:43:42.531792088 +0000 UTC m=+0.101607236 container init a2ef3a50d1338fe86072e1a3d0b20505d8be808bfd27bf53ebf81f17af415b35 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elated_proskuriakova, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:42 compute-0 podman[99130]: 2026-01-26 09:43:42.536697207 +0000 UTC m=+0.106512345 container start a2ef3a50d1338fe86072e1a3d0b20505d8be808bfd27bf53ebf81f17af415b35 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elated_proskuriakova, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:42 compute-0 elated_proskuriakova[99146]: 65534 65534
Jan 26 09:43:42 compute-0 systemd[1]: libpod-a2ef3a50d1338fe86072e1a3d0b20505d8be808bfd27bf53ebf81f17af415b35.scope: Deactivated successfully.
Jan 26 09:43:42 compute-0 podman[99130]: 2026-01-26 09:43:42.53993097 +0000 UTC m=+0.109746148 container attach a2ef3a50d1338fe86072e1a3d0b20505d8be808bfd27bf53ebf81f17af415b35 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elated_proskuriakova, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:42 compute-0 podman[99130]: 2026-01-26 09:43:42.54029667 +0000 UTC m=+0.110111818 container died a2ef3a50d1338fe86072e1a3d0b20505d8be808bfd27bf53ebf81f17af415b35 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elated_proskuriakova, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:42 compute-0 podman[99130]: 2026-01-26 09:43:42.45041516 +0000 UTC m=+0.020230298 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 26 09:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-170fb26c86e196b4263e0eb4639e454684a91f2ac832c3349ff1d21ee62cf4e8-merged.mount: Deactivated successfully.
Jan 26 09:43:42 compute-0 podman[99130]: 2026-01-26 09:43:42.579435896 +0000 UTC m=+0.149251044 container remove a2ef3a50d1338fe86072e1a3d0b20505d8be808bfd27bf53ebf81f17af415b35 (image=quay.io/prometheus/alertmanager:v0.25.0, name=elated_proskuriakova, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:42 compute-0 podman[99130]: 2026-01-26 09:43:42.584070167 +0000 UTC m=+0.153885315 volume remove d1735ff86c250f966839903e3bef43d095ff390e697a2f18ffa77e255c68819b
Jan 26 09:43:42 compute-0 systemd[1]: libpod-conmon-a2ef3a50d1338fe86072e1a3d0b20505d8be808bfd27bf53ebf81f17af415b35.scope: Deactivated successfully.
Jan 26 09:43:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 26 09:43:42 compute-0 systemd[1]: Reloading.
Jan 26 09:43:42 compute-0 systemd-sysv-generator[99194]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:42 compute-0 systemd-rc-local-generator[99191]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v44: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Jan 26 09:43:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 26 09:43:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 26 09:43:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:42 compute-0 ceph-mon[74456]: Deploying daemon alertmanager.compute-0 on compute-0
Jan 26 09:43:42 compute-0 ceph-mon[74456]: pgmap v42: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:43:42 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:42 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:43:42 compute-0 ceph-mon[74456]: osdmap e64: 3 total, 3 up, 3 in
Jan 26 09:43:42 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 09:43:42 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:43:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:43:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 26 09:43:42 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 26 09:43:42 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 207883ff-7065-455c-90d6-8bf364f4f869 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Jan 26 09:43:42 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev e8ca8f60-bc7c-4c7e-83ab-4c44d46115cc (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 26 09:43:42 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event e8ca8f60-bc7c-4c7e-83ab-4c44d46115cc (PG autoscaler increasing pool 8 PGs from 1 to 32) in 6 seconds
Jan 26 09:43:42 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev cf93da99-9293-4033-a2c2-d35a716570a2 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 26 09:43:42 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event cf93da99-9293-4033-a2c2-d35a716570a2 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 4 seconds
Jan 26 09:43:42 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev a2f4f755-acc3-4a99-9c67-a522066f69c7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 26 09:43:42 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event a2f4f755-acc3-4a99-9c67-a522066f69c7 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 3 seconds
Jan 26 09:43:42 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev 6038409b-9b55-41c1-a036-18e47e43f69f (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 26 09:43:42 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 6038409b-9b55-41c1-a036-18e47e43f69f (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Jan 26 09:43:42 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev 207883ff-7065-455c-90d6-8bf364f4f869 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Jan 26 09:43:42 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 207883ff-7065-455c-90d6-8bf364f4f869 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.12( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.13( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.12( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.10( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.11( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.10( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.11( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.5( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.4( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.13( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.b( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.7( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.6( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.a( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.b( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.17( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.16( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.a( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.6( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.7( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.9( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.8( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.16( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.5( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.8( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.17( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.9( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.14( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.15( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.f( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.e( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.d( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.e( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.f( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.c( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.c( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.4( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.d( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.1( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.0( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 47'8 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.2( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.1( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.0( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 60'1158 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.3( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.1d( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.2( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.3( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.1c( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.1d( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.1f( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.1c( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.1e( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.1e( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.19( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.1f( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.18( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.19( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.18( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.1a( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.1b( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.1b( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.1a( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[9.15( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=48/48 les/c/f=49/49/0 sis=63) [0] r=0 lpr=63 pi=[48,63)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 65 pg[8.14( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=46/46 les/c/f=47/47/0 sis=63) [0] r=0 lpr=63 pi=[46,63)/1 crt=47'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:42 compute-0 systemd[1]: Reloading.
Jan 26 09:43:42 compute-0 systemd-rc-local-generator[99228]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:42 compute-0 systemd-sysv-generator[99231]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:43 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:43:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj[98870]: Mon Jan 26 09:43:43 2026: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Jan 26 09:43:43 compute-0 podman[99288]: 2026-01-26 09:43:43.3068455 +0000 UTC m=+0.033880036 volume create 57bd9804c922c4d04bf24455174cb499c61524445e0876249914e58f27264d95
Jan 26 09:43:43 compute-0 podman[99288]: 2026-01-26 09:43:43.315190138 +0000 UTC m=+0.042224664 container create c4359c311b7c569be419514f7aac4166a74171aef95e4c4175d3ad1795dea38a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f28aed2c445474a08ec8e835cf0e36e79dc86a07288c5541255fc051a52b09/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f28aed2c445474a08ec8e835cf0e36e79dc86a07288c5541255fc051a52b09/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:43 compute-0 podman[99288]: 2026-01-26 09:43:43.375753134 +0000 UTC m=+0.102787700 container init c4359c311b7c569be419514f7aac4166a74171aef95e4c4175d3ad1795dea38a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:43 compute-0 podman[99288]: 2026-01-26 09:43:43.380772646 +0000 UTC m=+0.107807182 container start c4359c311b7c569be419514f7aac4166a74171aef95e4c4175d3ad1795dea38a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:43:43 compute-0 bash[99288]: c4359c311b7c569be419514f7aac4166a74171aef95e4c4175d3ad1795dea38a
Jan 26 09:43:43 compute-0 podman[99288]: 2026-01-26 09:43:43.293995134 +0000 UTC m=+0.021029690 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 26 09:43:43 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:43:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[99303]: ts=2026-01-26T09:43:43.403Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Jan 26 09:43:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[99303]: ts=2026-01-26T09:43:43.403Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Jan 26 09:43:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[99303]: ts=2026-01-26T09:43:43.411Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Jan 26 09:43:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[99303]: ts=2026-01-26T09:43:43.412Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Jan 26 09:43:43 compute-0 sudo[98906]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[99303]: ts=2026-01-26T09:43:43.445Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 26 09:43:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[99303]: ts=2026-01-26T09:43:43.445Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 26 09:43:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[99303]: ts=2026-01-26T09:43:43.450Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Jan 26 09:43:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[99303]: ts=2026-01-26T09:43:43.450Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Jan 26 09:43:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:43:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:43:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 26 09:43:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev afa46e16-ff31-4351-836f-d3014d049d5a (Updating alertmanager deployment (+1 -> 1))
Jan 26 09:43:43 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event afa46e16-ff31-4351-836f-d3014d049d5a (Updating alertmanager deployment (+1 -> 1)) in 4 seconds
Jan 26 09:43:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 26 09:43:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev cec580a1-5fb8-4964-b69a-8da7b78f8eca (Updating grafana deployment (+1 -> 1))
Jan 26 09:43:43 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Jan 26 09:43:43 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Jan 26 09:43:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Jan 26 09:43:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Jan 26 09:43:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Jan 26 09:43:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 26 09:43:43 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 26 09:43:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Jan 26 09:43:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Jan 26 09:43:43 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Jan 26 09:43:43 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 26 09:43:43 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 26 09:43:43 compute-0 sudo[99324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:43:43 compute-0 sudo[99324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:43:43 compute-0 sudo[99324]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:43 compute-0 sudo[99349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:43:43 compute-0 sudo[99349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:43:43 compute-0 ceph-mon[74456]: pgmap v44: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 563 B/s rd, 0 op/s
Jan 26 09:43:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:43:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Jan 26 09:43:43 compute-0 ceph-mon[74456]: osdmap e65: 3 total, 3 up, 3 in
Jan 26 09:43:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 26 09:43:43 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 26 09:43:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:43:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:43:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 26 09:43:43 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 66 pg[11.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=9.766932487s) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active pruub 211.235702515s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:43 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 26 09:43:43 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 66 pg[11.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=9.766932487s) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown pruub 211.235702515s@ mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:44 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc0033b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:44 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:44 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:44 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 26 09:43:44 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 26 09:43:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v47: 322 pgs: 62 unknown, 64 peering, 196 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:43:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Jan 26 09:43:44 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 26 09:43:44 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:43:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 26 09:43:44 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.10( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.11( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.13( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.12( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.7( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.8( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.9( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.14( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.5( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.4( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.a( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.6( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.15( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.b( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.16( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.c( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.e( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.d( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.f( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.3( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.2( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1f( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1e( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1d( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1c( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1b( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1a( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.19( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.18( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.17( empty local-lis/les=52/53 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.10( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-mon[74456]: Regenerating cephadm self-signed grafana TLS certificates
Jan 26 09:43:44 compute-0 ceph-mon[74456]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 26 09:43:44 compute-0 ceph-mon[74456]: Deploying daemon grafana.compute-0 on compute-0
Jan 26 09:43:44 compute-0 ceph-mon[74456]: 8.12 scrub starts
Jan 26 09:43:44 compute-0 ceph-mon[74456]: 8.12 scrub ok
Jan 26 09:43:44 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:43:44 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:43:44 compute-0 ceph-mon[74456]: osdmap e66: 3 total, 3 up, 3 in
Jan 26 09:43:44 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.11( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.13( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.7( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.12( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.8( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.5( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.9( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.4( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.a( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.14( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.6( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.15( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.b( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.16( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.c( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.e( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.d( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.f( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.3( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.2( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.0( empty local-lis/les=66/67 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1e( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1c( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1d( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1b( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.19( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1a( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.1f( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.17( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 67 pg[11.18( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=52/52 les/c/f=53/53/0 sis=66) [0] r=0 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
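[annotation] Taken together, the osd.0 lines above show pool 11's PGs walking the peering state machine: epoch 66 opens a new peering interval, state<Start> transitions to Primary because osd.0 is the up/acting primary, and the AllReplicasActivated reaction lands each PG in Started/Primary/Active. A toy model of that progression — purely illustrative; the real machine lives in Ceph's PeeringState and is far richer:

    # Illustrative sketch of the transitions logged above, not the actual
    # PeeringState implementation.
    class PG:
        def __init__(self, pgid, whoami, up):
            self.pgid, self.whoami, self.up = pgid, whoami, up
            self.state = "Start"

        def start_peering_interval(self):
            # Role 0 means this OSD heads the acting set; -1 means absent.
            role = self.up.index(self.whoami) if self.whoami in self.up else -1
            self.state = "Primary" if role == 0 else "Stray"

        def on_all_replicas_activated(self):
            if self.state == "Primary":
                self.state = "Started/Primary/Active"

    pg = PG("11.10", whoami=0, up=[0])
    pg.start_peering_interval()     # "state<Start>: transitioning to Primary"
    pg.on_all_replicas_activated()  # "react AllReplicasActivated Activating complete"
    print(pg.pgid, pg.state)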
Jan 26 09:43:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[99303]: ts=2026-01-26T09:43:45.413Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000893938s
Jan 26 09:43:45 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 26 09:43:45 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 26 09:43:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 26 09:43:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 26 09:43:45 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 26 09:43:45 compute-0 ceph-mon[74456]: 10.1b scrub starts
Jan 26 09:43:45 compute-0 ceph-mon[74456]: 10.1b scrub ok
Jan 26 09:43:45 compute-0 ceph-mon[74456]: 8.13 scrub starts
Jan 26 09:43:45 compute-0 ceph-mon[74456]: 8.13 scrub ok
Jan 26 09:43:45 compute-0 ceph-mon[74456]: pgmap v47: 322 pgs: 62 unknown, 64 peering, 196 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:43:45 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 09:43:45 compute-0 ceph-mon[74456]: osdmap e67: 3 total, 3 up, 3 in
Jan 26 09:43:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:46 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9040042e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:46 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc0033b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:46 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:46 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 26 09:43:46 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 26 09:43:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 93 unknown, 64 peering, 196 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:43:46 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 23 completed events
Jan 26 09:43:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:43:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:46 compute-0 ceph-mon[74456]: 10.12 scrub starts
Jan 26 09:43:46 compute-0 ceph-mon[74456]: 10.12 scrub ok
Jan 26 09:43:46 compute-0 ceph-mon[74456]: 9.10 scrub starts
Jan 26 09:43:46 compute-0 ceph-mon[74456]: 9.10 scrub ok
Jan 26 09:43:46 compute-0 ceph-mon[74456]: osdmap e68: 3 total, 3 up, 3 in
Jan 26 09:43:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
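[annotation] The _set_new_cache_sizes line above is the mon's cache auto-tuner splitting its ~973 MiB target across incremental-osdmap, full-osdmap, and RocksDB allocations. The logged figures work out to 332 MiB, 332 MiB, and 308 MiB, which sum back to roughly cache_size, as a quick check confirms:

    # Checking the allocation split in the _set_new_cache_sizes line above.
    cache_size = 1020054731
    inc_alloc  = 348127232   # incremental osdmap cache
    full_alloc = 348127232   # full osdmap cache
    kv_alloc   = 322961408   # RocksDB cache

    MiB = 1 << 20
    for name, val in [("inc", inc_alloc), ("full", full_alloc), ("kv", kv_alloc)]:
        print(f"{name}_alloc = {val / MiB:.0f} MiB ({val / cache_size:.1%})")
    print("sum / cache_size =", (inc_alloc + full_alloc + kv_alloc) / cache_size)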
Jan 26 09:43:47 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.10 deep-scrub starts
Jan 26 09:43:47 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.10 deep-scrub ok
Jan 26 09:43:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:48 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:48 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9040042e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:48 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc0033b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:48 compute-0 ceph-mon[74456]: 10.11 scrub starts
Jan 26 09:43:48 compute-0 ceph-mon[74456]: 10.11 scrub ok
Jan 26 09:43:48 compute-0 ceph-mon[74456]: 8.11 scrub starts
Jan 26 09:43:48 compute-0 ceph-mon[74456]: 8.11 scrub ok
Jan 26 09:43:48 compute-0 ceph-mon[74456]: pgmap v50: 353 pgs: 93 unknown, 64 peering, 196 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:43:48 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:48 compute-0 ceph-mon[74456]: 8.10 deep-scrub starts
Jan 26 09:43:48 compute-0 ceph-mon[74456]: 8.10 deep-scrub ok
Jan 26 09:43:48 compute-0 sudo[99624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skhrgimoksyeeumtkxfykenfxkhypllr ; /usr/bin/python3'
Jan 26 09:43:48 compute-0 sudo[99624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:43:48 compute-0 python3[99626]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
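[annotation] The Ansible task above shells out to podman to run radosgw-admin inside the quay.io/ceph/ceph:v19 container and look up the "openstack" RGW user. A hedged equivalent via subprocess — every path, the fsid, and the image tag are copied verbatim from the command in this log:

    import subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "radosgw-admin",
        "quay.io/ceph/ceph:v19",
        "--fsid", "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "user", "info", "--uid", "openstack",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.returncode, result.stdout or result.stderr)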
Jan 26 09:43:48 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 26 09:43:48 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 26 09:43:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:43:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 26 09:43:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 26 09:43:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 26 09:43:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 26 09:43:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 26 09:43:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 26 09:43:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
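[annotation] Having finished the pg_num_actual round, the mgr now dispatches a matching pgp_num_actual command per pool: pg_num creates the placement groups, while pgp_num is what CRUSH actually uses to spread data across them. The batch above, expressed as a loop over the (pool, value) pairs taken from these audit entries — send_mon_command is a hypothetical stand-in for the mon_command call sketched earlier:

    import json

    # (pool, value) pairs copied from the dispatch entries above.
    POOLS = [(".nfs", "32"), (".rgw.root", "32"),
             ("default.rgw.control", "32"), ("default.rgw.log", "2"),
             ("default.rgw.meta", "32")]

    def commands():
        for pool, val in POOLS:
            yield json.dumps({"prefix": "osd pool set", "pool": pool,
                              "var": "pgp_num_actual", "val": val})

    for c in commands():
        print(c)  # replace with send_mon_command(c) against a live cluster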
Jan 26 09:43:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 26 09:43:49 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 26 09:43:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:43:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:43:49 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 26 09:43:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:43:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 26 09:43:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:43:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 26 09:43:49 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[12.10( empty local-lis/les=0/0 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[10.19( empty local-lis/les=0/0 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-mon[74456]: 10.1f deep-scrub starts
Jan 26 09:43:49 compute-0 ceph-mon[74456]: 10.1f deep-scrub ok
Jan 26 09:43:49 compute-0 ceph-mon[74456]: 10.7 scrub starts
Jan 26 09:43:49 compute-0 ceph-mon[74456]: 10.7 scrub ok
Jan 26 09:43:49 compute-0 ceph-mon[74456]: 9.11 scrub starts
Jan 26 09:43:49 compute-0 ceph-mon[74456]: 9.11 scrub ok
Jan 26 09:43:49 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:49 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:49 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:49 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 26 09:43:49 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[10.18( empty local-lis/les=0/0 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[10.1b( empty local-lis/les=0/0 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[12.1c( empty local-lis/les=0/0 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[12.19( empty local-lis/les=0/0 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[12.6( empty local-lis/les=0/0 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[10.2( empty local-lis/les=0/0 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[12.8( empty local-lis/les=0/0 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[12.a( empty local-lis/les=0/0 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[12.b( empty local-lis/les=0/0 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[12.c( empty local-lis/les=0/0 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[12.12( empty local-lis/les=0/0 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[10.14( empty local-lis/les=0/0 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[10.15( empty local-lis/les=0/0 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[10.8( empty local-lis/les=0/0 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[12.e( empty local-lis/les=0/0 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[10.5( empty local-lis/les=0/0 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[10.13( empty local-lis/les=0/0 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.12( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.121469498s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.461380005s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.12( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.121431351s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.461380005s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.12( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.172377586s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.512420654s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.12( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.172350883s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.512420654s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.11( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.121253014s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.461502075s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.11( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.121232986s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.461502075s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.13( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.172027588s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.512481689s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.13( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.172007561s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.512481689s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.10( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.121068001s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.461547852s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.10( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.121048927s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.461547852s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.7( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.171778679s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.512512207s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.7( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.171758652s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.512512207s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.4( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.120795250s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.461608887s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.4( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.120764732s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.461608887s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.8( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.171683311s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.512603760s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.8( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.171666145s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.512603760s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.b( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.123970985s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.465087891s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.4( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.171585083s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.512710571s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.b( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.123951912s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.465087891s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.4( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.171564102s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.512710571s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.14( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.171275139s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.512786865s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.14( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.171257019s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.512786865s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.a( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.123519897s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.465148926s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.a( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.123499870s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.465148926s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.17( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.123551369s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.465225220s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.17( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.123524666s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.465225220s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.5( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.170780182s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.512634277s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.5( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.170762062s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.512634277s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.6( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.123394012s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.465286255s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.6( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.123376846s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.465286255s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.a( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.170582771s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.512741089s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.9( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.123163223s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.465332031s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.a( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.170560837s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.512741089s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.9( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.123142242s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.465332031s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.16( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122832298s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.465393066s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.8( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122806549s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.465408325s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.16( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.170264244s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.512908936s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.8( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122777939s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.465408325s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.16( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122742653s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.465393066s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.5( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122735977s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.465408325s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.16( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.170243263s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.512908936s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.5( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122687340s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.465408325s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.15( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122612953s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.465469360s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.15( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122594833s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.465469360s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.f( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122529984s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.465469360s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.f( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122512817s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.465469360s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.e( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169964790s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.512954712s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.e( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169950485s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.512954712s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.d( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122427940s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.465515137s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.d( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122404099s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.465515137s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.f( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169841766s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.513031006s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.f( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169826508s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.513031006s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.c( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122761726s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.466140747s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.3( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169686317s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.513107300s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.c( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122739792s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.466140747s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.3( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169665337s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.513107300s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.1( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169526100s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.513153076s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.1( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169508934s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.513153076s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.2( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122489929s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.466232300s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.3( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122539520s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.466308594s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.2( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122468948s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.466232300s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.3( v 47'9 (0'0,47'9] local-lis/les=63/65 n=1 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122516632s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.466308594s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.1c( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122466087s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.466400146s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.1c( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122447014s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.466400146s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.1e( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169171333s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.513183594s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.1d( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169180870s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.513214111s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.1e( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169138908s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.513183594s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.1d( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169158936s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.513214111s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.1c( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169028282s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.513198853s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.1c( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.169008255s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.513198853s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.1f( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122104645s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.466552734s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.18( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122106552s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.466583252s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.1f( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122087479s) [2] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.466552734s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.18( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.122087479s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.466583252s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.1b( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.168642044s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.513229370s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.1b( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.168620110s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.513229370s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.1a( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.168575287s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.513259888s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.1a( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.168559074s) [1] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.513259888s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.19( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.168402672s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.513244629s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.19( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.168383598s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.513244629s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.19( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.121656418s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.466567993s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.19( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.121637344s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.466567993s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.17( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.168184280s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 active pruub 218.513290405s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.1b( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.121517181s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.466644287s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[11.17( empty local-lis/les=66/67 n=0 ec=66/52 lis/c=66/66 les/c/f=67/67/0 sis=69 pruub=11.168162346s) [2] r=-1 lpr=69 pi=[66,69)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 218.513290405s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.1b( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.121495247s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.466644287s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.14( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.121509552s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 active pruub 216.466705322s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:49 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 69 pg[8.14( v 47'9 (0'0,47'9] local-lis/les=63/65 n=0 ec=63/46 lis/c=63/63 les/c/f=65/65/0 sis=69 pruub=9.121490479s) [1] r=-1 lpr=69 pi=[63,69)/1 crt=47'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.466705322s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:49 compute-0 podman[99627]: 2026-01-26 09:43:49.813451541 +0000 UTC m=+1.129385268 container create 87db4885d5ca77ef3999f691e3eca6940da6d501a703256c13e024e419b1f3c4 (image=quay.io/ceph/ceph:v19, name=sleepy_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:43:49 compute-0 systemd[1]: Started libpod-conmon-87db4885d5ca77ef3999f691e3eca6940da6d501a703256c13e024e419b1f3c4.scope.
Jan 26 09:43:49 compute-0 podman[99415]: 2026-01-26 09:43:49.857790035 +0000 UTC m=+5.621790583 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 26 09:43:49 compute-0 podman[99627]: 2026-01-26 09:43:49.79163854 +0000 UTC m=+1.107572297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:43:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f058b5f00acaaf89298c107c38dcf08e5a36dcfa441cb7f9b4d01c105899def/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f058b5f00acaaf89298c107c38dcf08e5a36dcfa441cb7f9b4d01c105899def/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:50 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:50 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:50 compute-0 podman[99415]: 2026-01-26 09:43:50.111397951 +0000 UTC m=+5.875398549 container create 08ca94465b8c3d1d1a81374399e30ce40b4ccd28e283770039ffcaebb29be798 (image=quay.io/ceph/grafana:10.4.0, name=cranky_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:50 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:50 compute-0 podman[99627]: 2026-01-26 09:43:50.150433843 +0000 UTC m=+1.466367610 container init 87db4885d5ca77ef3999f691e3eca6940da6d501a703256c13e024e419b1f3c4 (image=quay.io/ceph/ceph:v19, name=sleepy_germain, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:43:50 compute-0 systemd[1]: Started libpod-conmon-08ca94465b8c3d1d1a81374399e30ce40b4ccd28e283770039ffcaebb29be798.scope.
Jan 26 09:43:50 compute-0 podman[99627]: 2026-01-26 09:43:50.160997544 +0000 UTC m=+1.476931281 container start 87db4885d5ca77ef3999f691e3eca6940da6d501a703256c13e024e419b1f3c4 (image=quay.io/ceph/ceph:v19, name=sleepy_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 09:43:50 compute-0 podman[99627]: 2026-01-26 09:43:50.164859414 +0000 UTC m=+1.480793151 container attach 87db4885d5ca77ef3999f691e3eca6940da6d501a703256c13e024e419b1f3c4 (image=quay.io/ceph/ceph:v19, name=sleepy_germain, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 09:43:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:50 compute-0 podman[99415]: 2026-01-26 09:43:50.252441339 +0000 UTC m=+6.016441887 container init 08ca94465b8c3d1d1a81374399e30ce40b4ccd28e283770039ffcaebb29be798 (image=quay.io/ceph/grafana:10.4.0, name=cranky_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:50 compute-0 podman[99415]: 2026-01-26 09:43:50.2591334 +0000 UTC m=+6.023133948 container start 08ca94465b8c3d1d1a81374399e30ce40b4ccd28e283770039ffcaebb29be798 (image=quay.io/ceph/grafana:10.4.0, name=cranky_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:50 compute-0 cranky_hodgkin[99692]: 472 0
Jan 26 09:43:50 compute-0 systemd[1]: libpod-08ca94465b8c3d1d1a81374399e30ce40b4ccd28e283770039ffcaebb29be798.scope: Deactivated successfully.
Jan 26 09:43:50 compute-0 podman[99415]: 2026-01-26 09:43:50.297149013 +0000 UTC m=+6.061149571 container attach 08ca94465b8c3d1d1a81374399e30ce40b4ccd28e283770039ffcaebb29be798 (image=quay.io/ceph/grafana:10.4.0, name=cranky_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:50 compute-0 podman[99415]: 2026-01-26 09:43:50.297866893 +0000 UTC m=+6.061867441 container died 08ca94465b8c3d1d1a81374399e30ce40b4ccd28e283770039ffcaebb29be798 (image=quay.io/ceph/grafana:10.4.0, name=cranky_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0925b488ecb679cadca73a8bdd6930389271170cd2744279a2e24fd8b050be80-merged.mount: Deactivated successfully.
Jan 26 09:43:50 compute-0 podman[99415]: 2026-01-26 09:43:50.350573585 +0000 UTC m=+6.114574133 container remove 08ca94465b8c3d1d1a81374399e30ce40b4ccd28e283770039ffcaebb29be798 (image=quay.io/ceph/grafana:10.4.0, name=cranky_hodgkin, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:50 compute-0 systemd[1]: libpod-conmon-08ca94465b8c3d1d1a81374399e30ce40b4ccd28e283770039ffcaebb29be798.scope: Deactivated successfully.
Jan 26 09:43:50 compute-0 podman[99740]: 2026-01-26 09:43:50.455801193 +0000 UTC m=+0.066277769 container create e428a68cfa73f56a0623bd0e3cdbf0e6451b24f8fde0fd272699e2c862eebc28 (image=quay.io/ceph/grafana:10.4.0, name=nervous_haslett, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:50 compute-0 systemd[1]: Started libpod-conmon-e428a68cfa73f56a0623bd0e3cdbf0e6451b24f8fde0fd272699e2c862eebc28.scope.
Jan 26 09:43:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:50 compute-0 podman[99740]: 2026-01-26 09:43:50.420764495 +0000 UTC m=+0.031241091 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 26 09:43:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 26 09:43:50 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 26 09:43:50 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 26 09:43:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v53: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:43:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 26 09:43:50 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 26 09:43:50 compute-0 podman[99740]: 2026-01-26 09:43:50.85195348 +0000 UTC m=+0.462430076 container init e428a68cfa73f56a0623bd0e3cdbf0e6451b24f8fde0fd272699e2c862eebc28 (image=quay.io/ceph/grafana:10.4.0, name=nervous_haslett, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 26 09:43:50 compute-0 podman[99740]: 2026-01-26 09:43:50.859032492 +0000 UTC m=+0.469509068 container start e428a68cfa73f56a0623bd0e3cdbf0e6451b24f8fde0fd272699e2c862eebc28 (image=quay.io/ceph/grafana:10.4.0, name=nervous_haslett, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:50 compute-0 nervous_haslett[99757]: 472 0
Jan 26 09:43:50 compute-0 systemd[1]: libpod-e428a68cfa73f56a0623bd0e3cdbf0e6451b24f8fde0fd272699e2c862eebc28.scope: Deactivated successfully.
Jan 26 09:43:50 compute-0 podman[99740]: 2026-01-26 09:43:50.865179418 +0000 UTC m=+0.475656004 container attach e428a68cfa73f56a0623bd0e3cdbf0e6451b24f8fde0fd272699e2c862eebc28 (image=quay.io/ceph/grafana:10.4.0, name=nervous_haslett, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:50 compute-0 podman[99740]: 2026-01-26 09:43:50.865537748 +0000 UTC m=+0.476014334 container died e428a68cfa73f56a0623bd0e3cdbf0e6451b24f8fde0fd272699e2c862eebc28 (image=quay.io/ceph/grafana:10.4.0, name=nervous_haslett, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:50 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[10.5( v 60'48 (0'0,60'48] local-lis/les=69/70 n=1 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=60'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[10.13( v 60'48 (0'0,60'48] local-lis/les=69/70 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=60'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-mon[74456]: pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:43:50 compute-0 ceph-mon[74456]: 10.1a scrub starts
Jan 26 09:43:50 compute-0 ceph-mon[74456]: 10.1a scrub ok
Jan 26 09:43:50 compute-0 ceph-mon[74456]: 9.5 scrub starts
Jan 26 09:43:50 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:43:50 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:43:50 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:43:50 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 26 09:43:50 compute-0 ceph-mon[74456]: 9.5 scrub ok
Jan 26 09:43:50 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:43:50 compute-0 ceph-mon[74456]: osdmap e69: 3 total, 3 up, 3 in
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[12.e( v 60'46 (0'0,60'46] local-lis/les=69/70 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=60'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[12.12( v 60'46 (0'0,60'46] local-lis/les=69/70 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=60'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[10.14( v 66'51 lc 60'45 (0'0,66'51] local-lis/les=69/70 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=66'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[12.c( v 60'46 (0'0,60'46] local-lis/les=69/70 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=60'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[10.15( v 66'51 lc 60'37 (0'0,66'51] local-lis/les=69/70 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=66'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[12.b( v 60'46 (0'0,60'46] local-lis/les=69/70 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=60'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[12.a( v 60'46 (0'0,60'46] local-lis/les=69/70 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=60'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[10.2( v 60'48 (0'0,60'48] local-lis/les=69/70 n=1 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=60'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[10.8( v 60'48 (0'0,60'48] local-lis/les=69/70 n=1 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=60'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[12.6( v 60'46 (0'0,60'46] local-lis/les=69/70 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=60'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[12.19( v 60'46 (0'0,60'46] local-lis/les=69/70 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=60'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[12.1c( v 60'46 (0'0,60'46] local-lis/les=69/70 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=60'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[12.8( v 60'46 (0'0,60'46] local-lis/les=69/70 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=60'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[10.1b( v 60'48 (0'0,60'48] local-lis/les=69/70 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=60'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[10.19( v 60'48 (0'0,60'48] local-lis/les=69/70 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=60'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[10.18( v 60'48 (0'0,60'48] local-lis/les=69/70 n=0 ec=65/50 lis/c=65/65 les/c/f=66/66/0 sis=69) [0] r=0 lpr=69 pi=[65,69)/1 crt=60'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 70 pg[12.10( v 68'49 lc 60'14 (0'0,68'49] local-lis/les=69/70 n=0 ec=67/58 lis/c=67/67 les/c/f=68/68/0 sis=69) [0] r=0 lpr=69 pi=[67,69)/1 crt=68'49 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-7308daa7b3b27e3f4a267cb10cd22c9f480f13e613288a7545a78413aec9d26b-merged.mount: Deactivated successfully.
Jan 26 09:43:50 compute-0 podman[99740]: 2026-01-26 09:43:50.905646331 +0000 UTC m=+0.516122907 container remove e428a68cfa73f56a0623bd0e3cdbf0e6451b24f8fde0fd272699e2c862eebc28 (image=quay.io/ceph/grafana:10.4.0, name=nervous_haslett, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:50 compute-0 systemd[1]: libpod-conmon-e428a68cfa73f56a0623bd0e3cdbf0e6451b24f8fde0fd272699e2c862eebc28.scope: Deactivated successfully.
Jan 26 09:43:51 compute-0 systemd[1]: Reloading.
Jan 26 09:43:51 compute-0 sleepy_germain[99685]: could not fetch user info: no user info saved
Jan 26 09:43:51 compute-0 systemd-rc-local-generator[99850]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:51 compute-0 systemd-sysv-generator[99859]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:51 compute-0 podman[99627]: 2026-01-26 09:43:51.175442416 +0000 UTC m=+2.491376143 container died 87db4885d5ca77ef3999f691e3eca6940da6d501a703256c13e024e419b1f3c4 (image=quay.io/ceph/ceph:v19, name=sleepy_germain, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 09:43:51 compute-0 systemd[1]: libpod-87db4885d5ca77ef3999f691e3eca6940da6d501a703256c13e024e419b1f3c4.scope: Deactivated successfully.
Jan 26 09:43:51 compute-0 systemd[1]: Reloading.
Jan 26 09:43:51 compute-0 systemd-rc-local-generator[99902]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:51 compute-0 systemd-sysv-generator[99908]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:51 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 26 09:43:51 compute-0 podman[99627]: 2026-01-26 09:43:51.658235752 +0000 UTC m=+2.974169469 container remove 87db4885d5ca77ef3999f691e3eca6940da6d501a703256c13e024e419b1f3c4 (image=quay.io/ceph/ceph:v19, name=sleepy_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:43:51 compute-0 sudo[99624]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:51 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 26 09:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f058b5f00acaaf89298c107c38dcf08e5a36dcfa441cb7f9b4d01c105899def-merged.mount: Deactivated successfully.
Jan 26 09:43:51 compute-0 systemd[1]: libpod-conmon-87db4885d5ca77ef3999f691e3eca6940da6d501a703256c13e024e419b1f3c4.scope: Deactivated successfully.
Jan 26 09:43:51 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 26 09:43:51 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 26 09:43:51 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 26 09:43:51 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:43:51 compute-0 sudo[99943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpicqopilokyhgkdmbttzgzmybphpgqk ; /usr/bin/python3'
Jan 26 09:43:51 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 26 09:43:51 compute-0 sudo[99943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:43:51 compute-0 ceph-mon[74456]: 10.16 scrub starts
Jan 26 09:43:51 compute-0 ceph-mon[74456]: 10.16 scrub ok
Jan 26 09:43:51 compute-0 ceph-mon[74456]: 11.10 scrub starts
Jan 26 09:43:51 compute-0 ceph-mon[74456]: 11.10 scrub ok
Jan 26 09:43:51 compute-0 ceph-mon[74456]: pgmap v53: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:43:51 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 26 09:43:51 compute-0 ceph-mon[74456]: osdmap e70: 3 total, 3 up, 3 in
Jan 26 09:43:51 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 26 09:43:51 compute-0 ceph-mon[74456]: osdmap e71: 3 total, 3 up, 3 in
Jan 26 09:43:51 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event a30cde41-70a7-4b90-aa70-5440705b0f73 (Global Recovery Event) in 10 seconds
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:52 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:52 compute-0 python3[99949]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:52 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:52 compute-0 podman[99990]: 2026-01-26 09:43:52.10674579 +0000 UTC m=+0.052876887 container create 37543b306a69fa2eced8cd2ff49cd5846da81b5c651e89baeb66035e7f8919af (image=quay.io/ceph/ceph:v19, name=stoic_goldstine, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 26 09:43:52 compute-0 podman[99995]: 2026-01-26 09:43:52.123016734 +0000 UTC m=+0.061103492 container create 19752b52da5205ecf87a29f7ba2f0a5446dcbf057bedea6661df25a0a9f3af6a (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:52 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:52 compute-0 systemd[1]: Started libpod-conmon-37543b306a69fa2eced8cd2ff49cd5846da81b5c651e89baeb66035e7f8919af.scope.
Jan 26 09:43:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:52 compute-0 podman[99990]: 2026-01-26 09:43:52.085300449 +0000 UTC m=+0.031431567 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 26 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52e8977ccad07210ba1d1e5600fd9d902eaa5086306f5d7b89354b0c97196f2/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52e8977ccad07210ba1d1e5600fd9d902eaa5086306f5d7b89354b0c97196f2/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52e8977ccad07210ba1d1e5600fd9d902eaa5086306f5d7b89354b0c97196f2/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327f70eddb0dd77621ca8360537254a3ae03d6d8b1fa2fbd589a75838890d4ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327f70eddb0dd77621ca8360537254a3ae03d6d8b1fa2fbd589a75838890d4ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52e8977ccad07210ba1d1e5600fd9d902eaa5086306f5d7b89354b0c97196f2/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52e8977ccad07210ba1d1e5600fd9d902eaa5086306f5d7b89354b0c97196f2/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:52 compute-0 podman[99995]: 2026-01-26 09:43:52.089019006 +0000 UTC m=+0.027105774 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 26 09:43:52 compute-0 podman[99995]: 2026-01-26 09:43:52.200730019 +0000 UTC m=+0.138816787 container init 19752b52da5205ecf87a29f7ba2f0a5446dcbf057bedea6661df25a0a9f3af6a (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:52 compute-0 podman[99995]: 2026-01-26 09:43:52.210136226 +0000 UTC m=+0.148222974 container start 19752b52da5205ecf87a29f7ba2f0a5446dcbf057bedea6661df25a0a9f3af6a (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:43:52 compute-0 podman[99990]: 2026-01-26 09:43:52.210814926 +0000 UTC m=+0.156946043 container init 37543b306a69fa2eced8cd2ff49cd5846da81b5c651e89baeb66035e7f8919af (image=quay.io/ceph/ceph:v19, name=stoic_goldstine, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 09:43:52 compute-0 bash[99995]: 19752b52da5205ecf87a29f7ba2f0a5446dcbf057bedea6661df25a0a9f3af6a
Jan 26 09:43:52 compute-0 podman[99990]: 2026-01-26 09:43:52.220031108 +0000 UTC m=+0.166162205 container start 37543b306a69fa2eced8cd2ff49cd5846da81b5c651e89baeb66035e7f8919af (image=quay.io/ceph/ceph:v19, name=stoic_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 09:43:52 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:43:52 compute-0 podman[99990]: 2026-01-26 09:43:52.225533135 +0000 UTC m=+0.171664272 container attach 37543b306a69fa2eced8cd2ff49cd5846da81b5c651e89baeb66035e7f8919af (image=quay.io/ceph/ceph:v19, name=stoic_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:43:52 compute-0 sudo[99349]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:43:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:43:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 26 09:43:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:52 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev cec580a1-5fb8-4964-b69a-8da7b78f8eca (Updating grafana deployment (+1 -> 1))
Jan 26 09:43:52 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event cec580a1-5fb8-4964-b69a-8da7b78f8eca (Updating grafana deployment (+1 -> 1)) in 9 seconds
Jan 26 09:43:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 26 09:43:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:52 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev c33f7d9c-a41b-4a4f-8a21-a13e4926bcdf (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 26 09:43:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Jan 26 09:43:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:52 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.ovxbdp on compute-0
Jan 26 09:43:52 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.ovxbdp on compute-0
Jan 26 09:43:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:43:52 compute-0 sudo[100123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:43:52 compute-0 sudo[100123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.40787024Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-01-26T09:43:52Z
Jan 26 09:43:52 compute-0 sudo[100123]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408315753Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408325943Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408331293Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408335784Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408340344Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408344754Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408349374Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408354514Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408359124Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408362924Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408367104Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408375695Z level=info msg=Target target=[all]
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408386475Z level=info msg="Path Home" path=/usr/share/grafana
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408390385Z level=info msg="Path Data" path=/var/lib/grafana
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408394085Z level=info msg="Path Logs" path=/var/log/grafana
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408397875Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408401715Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=settings t=2026-01-26T09:43:52.408405655Z level=info msg="App mode production"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=sqlstore t=2026-01-26T09:43:52.40894478Z level=info msg="Connecting to DB" dbtype=sqlite3
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=sqlstore t=2026-01-26T09:43:52.408971041Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.410312309Z level=info msg="Starting DB migrations"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.411578826Z level=info msg="Executing migration" id="create migration_log table"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.412582774Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.003378ms
Jan 26 09:43:52 compute-0 sudo[100148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:43:52 compute-0 sudo[100148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.513718656Z level=info msg="Executing migration" id="create user table"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.515179237Z level=info msg="Migration successfully executed" id="create user table" duration=1.476562ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.519065748Z level=info msg="Executing migration" id="add unique index user.login"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.52089364Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.832862ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.523759472Z level=info msg="Executing migration" id="add unique index user.email"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.524913805Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.151353ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.527964981Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.529072463Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.109842ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.531651087Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.532922513Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.257406ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.535776614Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.539253813Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.472169ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.541868148Z level=info msg="Executing migration" id="create user table v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.542818315Z level=info msg="Migration successfully executed" id="create user table v2" duration=951.117µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.54477259Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.545751458Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=979.438µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.547590451Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.548361323Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=770.432µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.550649328Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.551038779Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=390.072µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.55284445Z level=info msg="Executing migration" id="Drop old table user_v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.554117877Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.265867ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.555661631Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.556890866Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.228305ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.558534713Z level=info msg="Executing migration" id="Update user table charset"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.558565733Z level=info msg="Migration successfully executed" id="Update user table charset" duration=31.641µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.560335514Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.56126841Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=933.026µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.56299875Z level=info msg="Executing migration" id="Add missing user data"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.563169874Z level=info msg="Migration successfully executed" id="Add missing user data" duration=171.234µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.565079219Z level=info msg="Executing migration" id="Add is_disabled column to user"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.565929183Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=850.154µs
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]: {
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "user_id": "openstack",
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "display_name": "openstack",
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "email": "",
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "suspended": 0,
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "max_buckets": 1000,
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "subusers": [],
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "keys": [
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:         {
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:             "user": "openstack",
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:             "access_key": "MALTSTE96D8NBL3945BQ",
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:             "secret_key": "hGvJ7u0ptqN4qWY2Xn3UKZh7WKTYDQ7lLhKDs8Lb",
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:             "active": true,
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:             "create_date": "2026-01-26T09:43:52.552791Z"
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:         }
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     ],
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "swift_keys": [],
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "caps": [],
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "op_mask": "read, write, delete",
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "default_placement": "",
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "default_storage_class": "",
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "placement_tags": [],
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "bucket_quota": {
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:         "enabled": false,
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:         "check_on_raw": false,
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:         "max_size": -1,
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:         "max_size_kb": 0,
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:         "max_objects": -1
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     },
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "user_quota": {
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:         "enabled": false,
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:         "check_on_raw": false,
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:         "max_size": -1,
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:         "max_size_kb": 0,
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:         "max_objects": -1
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     },
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "temp_url_keys": [],
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "type": "rgw",
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "mfa_ids": [],
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "account_id": "",
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "path": "/",
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "create_date": "2026-01-26T09:43:52.551807Z",
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "tags": [],
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]:     "group_ids": []
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]: }
Jan 26 09:43:52 compute-0 stoic_goldstine[100024]: 
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.56756286Z level=info msg="Executing migration" id="Add index user.login/user.email"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.568108606Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=540.965µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.570001349Z level=info msg="Executing migration" id="Add is_service_account column to user"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.570923446Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=926.797µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.572762958Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.580020184Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.255736ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.582056913Z level=info msg="Executing migration" id="Add uid column to user"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.58301108Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=939.517µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.584553014Z level=info msg="Executing migration" id="Update uid column values for users"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.584714129Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=160.865µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.586301794Z level=info msg="Executing migration" id="Add unique index user_uid"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.58689623Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=594.266µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.589074633Z level=info msg="Executing migration" id="create temp user table v1-7"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.589719851Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=644.938µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.591686877Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.592225772Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=533.605µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.594045414Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.594703123Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=657.219µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.596814533Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.597449262Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=627.398µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.5998446Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.600837628Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=996.768µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.603064331Z level=info msg="Executing migration" id="Update temp_user table charset"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.603086822Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=23.301µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.60479307Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.605611364Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=817.584µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.607612491Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.608331462Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=734.522µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.610231386Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Jan 26 09:43:52 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.611112231Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=881.366µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.613343054Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Jan 26 09:43:52 compute-0 systemd[1]: libpod-37543b306a69fa2eced8cd2ff49cd5846da81b5c651e89baeb66035e7f8919af.scope: Deactivated successfully.
Jan 26 09:43:52 compute-0 podman[99990]: 2026-01-26 09:43:52.61424517 +0000 UTC m=+0.560376277 container died 37543b306a69fa2eced8cd2ff49cd5846da81b5c651e89baeb66035e7f8919af (image=quay.io/ceph/ceph:v19, name=stoic_goldstine, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.613979443Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=636.839µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.616452062Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.620441646Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.989634ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.622508856Z level=info msg="Executing migration" id="create temp_user v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.623442192Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=930.405µs
Jan 26 09:43:52 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.625327596Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.626065546Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=731.65µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.627957311Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.628878387Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=915.855µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.6310871Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.63180539Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=718.62µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.633562231Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.634224289Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=661.408µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.63633661Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.636676909Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=340.579µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.638145681Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.638707567Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=562.466µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.640426216Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.640788966Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=362.84µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.642370101Z level=info msg="Executing migration" id="create star table"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.643658908Z level=info msg="Migration successfully executed" id="create star table" duration=1.288217ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.645282364Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.645926992Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=644.728µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.649216276Z level=info msg="Executing migration" id="create org table v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.65007829Z level=info msg="Migration successfully executed" id="create org table v1" duration=861.034µs
Jan 26 09:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-327f70eddb0dd77621ca8360537254a3ae03d6d8b1fa2fbd589a75838890d4ba-merged.mount: Deactivated successfully.
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.657275216Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.658867752Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.598735ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.662584127Z level=info msg="Executing migration" id="create org_user table v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.663866843Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.279856ms
Jan 26 09:43:52 compute-0 podman[99990]: 2026-01-26 09:43:52.664872712 +0000 UTC m=+0.611003809 container remove 37543b306a69fa2eced8cd2ff49cd5846da81b5c651e89baeb66035e7f8919af (image=quay.io/ceph/ceph:v19, name=stoic_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.666392626Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.667756724Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.364059ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.670428621Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.672039776Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.610535ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.675022551Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.676175064Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.157443ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.681973299Z level=info msg="Executing migration" id="Update org table charset"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.682170425Z level=info msg="Migration successfully executed" id="Update org table charset" duration=201.806µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.684298676Z level=info msg="Executing migration" id="Update org_user table charset"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.684359518Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=61.402µs
Jan 26 09:43:52 compute-0 systemd[1]: libpod-conmon-37543b306a69fa2eced8cd2ff49cd5846da81b5c651e89baeb66035e7f8919af.scope: Deactivated successfully.
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.686048156Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.686323883Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=274.937µs
Jan 26 09:43:52 compute-0 sudo[99943]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.687958061Z level=info msg="Executing migration" id="create dashboard table"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.688932118Z level=info msg="Migration successfully executed" id="create dashboard table" duration=995.979µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.691223983Z level=info msg="Executing migration" id="add index dashboard.account_id"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.692131419Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=908.065µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.694810825Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.695562677Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=751.722µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.697555673Z level=info msg="Executing migration" id="create dashboard_tag table"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.69812716Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=571.057µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.700005663Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.700657342Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=651.449µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.703603226Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.704996716Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.39855ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.706933581Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.711444599Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.488707ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.713165479Z level=info msg="Executing migration" id="create dashboard v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.713835397Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=669.788µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.715674079Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.716308328Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=634.149µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.718788489Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.719454747Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=665.958µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.723318988Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.723700608Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=382.05µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.725372516Z level=info msg="Executing migration" id="drop table dashboard_v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.726234361Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=861.945µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.728154135Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.728264808Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=111.413µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.729810472Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.731210782Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.39999ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.732908331Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Jan 26 09:43:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:43:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 26 09:43:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.735834924Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.926373ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.738264183Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.739936151Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.671468ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.741836245Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.742489763Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=653.538µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.744335217Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.746180909Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.845442ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.748707141Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.750025309Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.321808ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.752127108Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.752934311Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=806.823µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.755610238Z level=info msg="Executing migration" id="Update dashboard table charset"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.75569338Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=83.972µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.75780435Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.757869582Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=67.621µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.759721065Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.762434092Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.713087ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.764977175Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.767507146Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.530391ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.769483053Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.77218226Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.697897ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.774594758Z level=info msg="Executing migration" id="Add column uid in dashboard"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.777044838Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.44359ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.779584181Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.779847278Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=263.467µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.781761982Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.782647488Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=885.296µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.785460908Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.786496478Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.03497ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.78869491Z level=info msg="Executing migration" id="Update dashboard title length"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.788784883Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=88.293µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.791492529Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.792621882Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.129653ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.79501908Z level=info msg="Executing migration" id="create dashboard_provisioning"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.796528713Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.510373ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.801776443Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.80799655Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.216397ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.810366058Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.811218332Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=851.754µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.813838666Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.814830145Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=991.879µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.817331396Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.818332765Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.002219ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.820473715Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.820801425Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=327.57µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.822393481Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.82308166Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=687.979µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.825077526Z level=info msg="Executing migration" id="Add check_sum column"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.827037113Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.959057ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.8290523Z level=info msg="Executing migration" id="Add index for dashboard_title"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.829944656Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=892.525µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.831910021Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.83217876Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=269.309µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.834156136Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.834491985Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=336.6µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.836313196Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.837229653Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=916.507µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.839679293Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.841467994Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.788241ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.843048198Z level=info msg="Executing migration" id="create data_source table"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.843871512Z level=info msg="Migration successfully executed" id="create data_source table" duration=822.974µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.84591167Z level=info msg="Executing migration" id="add index data_source.account_id"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.84658436Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=672.63µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.848752441Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.849498563Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=745.432µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.85152673Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.852390055Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=863.855µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.854578458Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.855511364Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=933.376µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.857419079Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.861984849Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=4.565099ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.863842851Z level=info msg="Executing migration" id="create data_source table v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.864641614Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=798.823µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.866500187Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.867240639Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=737.751µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.868968948Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.869682427Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=713.819µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.872121107Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.872739374Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=619.007µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.874419553Z level=info msg="Executing migration" id="Add column with_credentials"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.876171983Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.75242ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.87818461Z level=info msg="Executing migration" id="Add secure json data column"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.879995632Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.810673ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.881597027Z level=info msg="Executing migration" id="Update data_source table charset"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.881658929Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=82.402µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.883479191Z level=info msg="Executing migration" id="Update initial version to 1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.883686976Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=208.845µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.885668243Z level=info msg="Executing migration" id="Add read_only data column"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.887527896Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.860343ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.889247705Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.889533193Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=285.248µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.891100358Z level=info msg="Executing migration" id="Update json_data with nulls"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.891319984Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=220.296µs
Jan 26 09:43:52 compute-0 ceph-mon[74456]: 10.17 scrub starts
Jan 26 09:43:52 compute-0 ceph-mon[74456]: 10.17 scrub ok
Jan 26 09:43:52 compute-0 ceph-mon[74456]: 9.12 scrub starts
Jan 26 09:43:52 compute-0 ceph-mon[74456]: 9.12 scrub ok
Jan 26 09:43:52 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:52 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:52 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:52 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:52 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:52 compute-0 ceph-mon[74456]: Deploying daemon haproxy.rgw.default.compute-0.ovxbdp on compute-0
Jan 26 09:43:52 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.89469824Z level=info msg="Executing migration" id="Add uid column"
Jan 26 09:43:52 compute-0 podman[100232]: 2026-01-26 09:43:52.89608108 +0000 UTC m=+0.046377193 container create 3426d08d8e7f008a7c163216c3a9de8b0fa79ca09911b6bccfc879b8b086f095 (image=quay.io/ceph/haproxy:2.3, name=suspicious_darwin)
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.897542131Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.848821ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.899394285Z level=info msg="Executing migration" id="Update uid value"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.899626001Z level=info msg="Migration successfully executed" id="Update uid value" duration=232.896µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.902092891Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.903606595Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.514665ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.905490068Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.906156926Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=666.818µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.908594666Z level=info msg="Executing migration" id="create api_key table"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.909382989Z level=info msg="Migration successfully executed" id="create api_key table" duration=788.573µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.911758876Z level=info msg="Executing migration" id="add index api_key.account_id"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.912677843Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=918.857µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.914596237Z level=info msg="Executing migration" id="add index api_key.key"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.915249596Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=655.839µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.918331424Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.919097976Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=766.892µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.921333129Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.92203429Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=700.871µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.923876092Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.92454683Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=668.298µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.926303461Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.927018631Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=714.94µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.929021019Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Jan 26 09:43:52 compute-0 systemd[1]: Started libpod-conmon-3426d08d8e7f008a7c163216c3a9de8b0fa79ca09911b6bccfc879b8b086f095.scope.
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.933718032Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=4.696433ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.935440941Z level=info msg="Executing migration" id="create api_key table v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.936136091Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=695.66µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.938026205Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.938712685Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=686.96µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.940404273Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.941086362Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=682.309µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.943344396Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.943977194Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=632.748µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.946312131Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.946872766Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=561.565µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.948415721Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.948949056Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=533.825µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.950596203Z level=info msg="Executing migration" id="Update api_key table charset"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.950655765Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=59.481µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.952133097Z level=info msg="Executing migration" id="Add expires to api_key table"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.953927287Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.794281ms
Jan 26 09:43:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.955570375Z level=info msg="Executing migration" id="Add service account foreign key"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.957483659Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.913654ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.959005912Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.959175967Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=170.625µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.961274967Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.96349602Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.221163ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.965452527Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.968461742Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=3.010066ms
Jan 26 09:43:52 compute-0 podman[100232]: 2026-01-26 09:43:52.968888834 +0000 UTC m=+0.119184977 container init 3426d08d8e7f008a7c163216c3a9de8b0fa79ca09911b6bccfc879b8b086f095 (image=quay.io/ceph/haproxy:2.3, name=suspicious_darwin)
Jan 26 09:43:52 compute-0 podman[100232]: 2026-01-26 09:43:52.874537646 +0000 UTC m=+0.024833779 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.97049978Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.971529199Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.029789ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.973337101Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.973908187Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=571.276µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.975947945Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.977029156Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.080941ms
Jan 26 09:43:52 compute-0 podman[100232]: 2026-01-26 09:43:52.977275653 +0000 UTC m=+0.127571766 container start 3426d08d8e7f008a7c163216c3a9de8b0fa79ca09911b6bccfc879b8b086f095 (image=quay.io/ceph/haproxy:2.3, name=suspicious_darwin)
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.979525177Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.980353631Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=832.383µs
Jan 26 09:43:52 compute-0 podman[100232]: 2026-01-26 09:43:52.981141253 +0000 UTC m=+0.131437396 container attach 3426d08d8e7f008a7c163216c3a9de8b0fa79ca09911b6bccfc879b8b086f095 (image=quay.io/ceph/haproxy:2.3, name=suspicious_darwin)
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.982928584Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.983847161Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=918.916µs
Jan 26 09:43:52 compute-0 suspicious_darwin[100246]: 0 0
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.986026472Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Jan 26 09:43:52 compute-0 systemd[1]: libpod-3426d08d8e7f008a7c163216c3a9de8b0fa79ca09911b6bccfc879b8b086f095.scope: Deactivated successfully.
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.986712672Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=685.67µs
Jan 26 09:43:52 compute-0 podman[100232]: 2026-01-26 09:43:52.987053432 +0000 UTC m=+0.137349545 container died 3426d08d8e7f008a7c163216c3a9de8b0fa79ca09911b6bccfc879b8b086f095 (image=quay.io/ceph/haproxy:2.3, name=suspicious_darwin)
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.988612416Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.988702558Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=90.442µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.990595063Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.990654665Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=60.462µs
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.992429215Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.99470192Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.272835ms
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.996255074Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Jan 26 09:43:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:52.998368074Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.11248ms
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.000627539Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.000718941Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=91.652µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.003723417Z level=info msg="Executing migration" id="create quota table v1"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.005586499Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.869193ms
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.010747827Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.011724525Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=979.208µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.014622717Z level=info msg="Executing migration" id="Update quota table charset"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.014647418Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=25.421µs
Jan 26 09:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7544787af0a5cc64a30cc123549d5f927efd7cc956830ea8c52d8d5d78442165-merged.mount: Deactivated successfully.
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.017722995Z level=info msg="Executing migration" id="create plugin_setting table"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.018502608Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=780.183µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.021104751Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.021789011Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=683.57µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.024294733Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.026781833Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.48369ms
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.032737494Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.032794585Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=61.552µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.094429551Z level=info msg="Executing migration" id="create session table"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.095922454Z level=info msg="Migration successfully executed" id="create session table" duration=1.497463ms
Jan 26 09:43:53 compute-0 python3[100285]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:43:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.395375716Z level=info msg="Executing migration" id="Drop old table playlist table"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.395516119Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=144.234µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.397450534Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.397519806Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=70.312µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.399164153Z level=info msg="Executing migration" id="create playlist table v2"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.399873884Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=709.32µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.402172279Z level=info msg="Executing migration" id="create playlist item table v2"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.402819477Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=646.368µs
Jan 26 09:43:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 26 09:43:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.405873654Z level=info msg="Executing migration" id="Update playlist table charset"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.405894645Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=21.451µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.407711627Z level=info msg="Executing migration" id="Update playlist_item table charset"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.407791859Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=80.952µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.409886329Z level=info msg="Executing migration" id="Add playlist column created_at"
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.13( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.405343056s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 224.461746216s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.13( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.405274391s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.461746216s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.b( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.408455849s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 224.465332031s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.7( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.408474922s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 224.465423584s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.b( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.408411026s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.465332031s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.7( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.408452988s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.465423584s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.17( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.408426285s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 224.465652466s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.17( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.408369064s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.465652466s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.f( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.408591270s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 224.466293335s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.f( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.408568382s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.466293335s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.3( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.408446312s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 224.466445923s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.3( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.408411980s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.466445923s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:53 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.1b( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.408352852s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 224.466796875s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.1b( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.408249855s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.466796875s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.1f( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.407724380s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 224.466552734s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:53 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 72 pg[9.1f( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=72 pruub=13.407688141s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.466552734s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.414446518Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.559659ms
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[99303]: ts=2026-01-26T09:43:53.416Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.004311785s
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.419930085Z level=info msg="Executing migration" id="Add playlist column updated_at"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.423094625Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.168341ms
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.425459642Z level=info msg="Executing migration" id="drop preferences table v2"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.425537334Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=82.702µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.42714553Z level=info msg="Executing migration" id="drop preferences table v3"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.427244714Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=99.414µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.429007044Z level=info msg="Executing migration" id="create preferences table v3"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.429824866Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=818.112µs
Jan 26 09:43:53 compute-0 ceph-mgr[74755]: [dashboard INFO request] [192.168.122.100:38216] [GET] [200] [0.158s] [6.3K] [90d966c9-4033-48b1-9d3b-5df76cbbcecf] /
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.536111375Z level=info msg="Executing migration" id="Update preferences table charset"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.536175987Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=70.801µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.597116913Z level=info msg="Executing migration" id="Add column team_id in preferences"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.600547951Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.432907ms
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.60298387Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.603245218Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=262.247µs
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.605678827Z level=info msg="Executing migration" id="Add column week_start in preferences"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.60962529Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.946163ms
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.611916454Z level=info msg="Executing migration" id="Add column preferences.json_data"
Jan 26 09:43:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:53.615448006Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.531401ms
Jan 26 09:43:53 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 26 09:43:53 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:54 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:54 compute-0 python3[100309]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:54 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:54 compute-0 ceph-mgr[74755]: [dashboard INFO request] [192.168.122.100:38228] [GET] [200] [0.003s] [6.3K] [0f6eab17-ee49-4d3d-b31e-42fd043cba12] /
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:54 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:54 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 26 09:43:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 116 B/s, 0 keys/s, 2 objects/s recovering
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.74256274Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.742757745Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=205.746µs
Jan 26 09:43:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 26 09:43:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 26 09:43:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.746578044Z level=info msg="Executing migration" id="Add preferences index org_id"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.748936151Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=2.357647ms
Jan 26 09:43:54 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.753661076Z level=info msg="Executing migration" id="Add preferences index user_id"
Jan 26 09:43:54 compute-0 podman[100232]: 2026-01-26 09:43:54.759002728 +0000 UTC m=+1.909298841 container remove 3426d08d8e7f008a7c163216c3a9de8b0fa79ca09911b6bccfc879b8b086f095 (image=quay.io/ceph/haproxy:2.3, name=suspicious_darwin)
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.761601333Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=7.933216ms
Jan 26 09:43:54 compute-0 systemd[1]: libpod-conmon-3426d08d8e7f008a7c163216c3a9de8b0fa79ca09911b6bccfc879b8b086f095.scope: Deactivated successfully.
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.766214994Z level=info msg="Executing migration" id="create alert table v1"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.767818689Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.627616ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.771554626Z level=info msg="Executing migration" id="add index alert org_id & id "
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.77345521Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.903354ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.776848086Z level=info msg="Executing migration" id="add index alert state"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.777955908Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.108022ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.780939963Z level=info msg="Executing migration" id="add index alert dashboard_id"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.784157425Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=3.217182ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.786653175Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.787232323Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=578.327µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.789089185Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.78994877Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=859.255µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.792141022Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.792965156Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=823.694µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.79487444Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.801838468Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=6.957718ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.803483745Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.804049592Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=565.967µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.806180492Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.806851441Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=671.119µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.80890882Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.809159767Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=251.257µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.81063692Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.811091492Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=454.922µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.812705028Z level=info msg="Executing migration" id="create alert_notification table v1"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.813313695Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=608.767µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.815033005Z level=info msg="Executing migration" id="Add column is_default"
Jan 26 09:43:54 compute-0 systemd[1]: Reloading.
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.818324218Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.289643ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.820055917Z level=info msg="Executing migration" id="Add column frequency"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.825271536Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.217039ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.827762537Z level=info msg="Executing migration" id="Add column send_reminder"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.830784823Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.024996ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.832479331Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.834981833Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.502212ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.836524157Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.837255938Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=733.741µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.839612564Z level=info msg="Executing migration" id="Update alert table charset"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.839638305Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=26.801µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.841442097Z level=info msg="Executing migration" id="Update alert_notification table charset"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.841467048Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=25.671µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.84366955Z level=info msg="Executing migration" id="create notification_journal table v1"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.844658859Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=956.228µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.84751889Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.849593299Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=2.074079ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.852424309Z level=info msg="Executing migration" id="drop alert_notification_journal"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.853111649Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=687.66µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.85488495Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.85559613Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=712.76µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.857239827Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.857910826Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=670.739µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.859532872Z level=info msg="Executing migration" id="Add for to alert table"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.862397494Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.863812ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.864313948Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.867060577Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.744189ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.869552888Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.869716143Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=163.274µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.871506343Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.87212688Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=619.527µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.873883751Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.874596851Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=712.86µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.876171816Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.881078436Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.89764ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.883238687Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.883306419Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=69.192µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.885135352Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.886010246Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=875.784µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.887661684Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.888494417Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=833.183µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.891685138Z level=info msg="Executing migration" id="Drop old annotation table v4"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.891826933Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=140.144µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.893673065Z level=info msg="Executing migration" id="create annotation table v5"
Jan 26 09:43:54 compute-0 systemd-sysv-generator[100344]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.895569749Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.895284ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.897941486Z level=info msg="Executing migration" id="add index annotation 0 v3"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.899916242Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.974466ms
Jan 26 09:43:54 compute-0 systemd-rc-local-generator[100341]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.902425054Z level=info msg="Executing migration" id="add index annotation 1 v3"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.903138675Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=713.971µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.905039599Z level=info msg="Executing migration" id="add index annotation 2 v3"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.905868723Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=828.644µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.908079875Z level=info msg="Executing migration" id="add index annotation 3 v3"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.908856238Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=774.983µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.911017119Z level=info msg="Executing migration" id="add index annotation 4 v3"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.913100119Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=2.082979ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.915571548Z level=info msg="Executing migration" id="Update annotation table charset"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.91559377Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=22.922µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.917259667Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.920977642Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.710315ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.923300389Z level=info msg="Executing migration" id="Drop category_id index"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.924099382Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=798.383µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.92580834Z level=info msg="Executing migration" id="Add column tags to annotation table"
Jan 26 09:43:54 compute-0 ceph-mon[74456]: 12.15 scrub starts
Jan 26 09:43:54 compute-0 ceph-mon[74456]: 12.15 scrub ok
Jan 26 09:43:54 compute-0 ceph-mon[74456]: 8.6 scrub starts
Jan 26 09:43:54 compute-0 ceph-mon[74456]: 8.6 scrub ok
Jan 26 09:43:54 compute-0 ceph-mon[74456]: 11.11 scrub starts
Jan 26 09:43:54 compute-0 ceph-mon[74456]: 11.11 scrub ok
Jan 26 09:43:54 compute-0 ceph-mon[74456]: pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:43:54 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 26 09:43:54 compute-0 ceph-mon[74456]: osdmap e72: 3 total, 3 up, 3 in
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.929237638Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.426508ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.931518303Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.932101169Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=582.896µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.933939542Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.934770335Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=830.333µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.936991199Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.938320857Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.328778ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.940308923Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.948587969Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=8.275486ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.951236195Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.951935144Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=699.669µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.953664344Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.954498637Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=833.463µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.956730521Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.957021289Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=291.478µs
Jan 26 09:43:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.960711525Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.961392854Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=686.389µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.964179643Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.964401069Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=221.826µs
Jan 26 09:43:54 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.966444958Z level=info msg="Executing migration" id="Add created time to annotation table"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.969618068Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.17494ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.971505132Z level=info msg="Executing migration" id="Add updated time to annotation table"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.974611401Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.105779ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.976641588Z level=info msg="Executing migration" id="Add index for created in annotation table"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.97739889Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=756.672µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.979181561Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.979929052Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=747.731µs
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.f( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.1f( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.f( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.1f( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.17( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.17( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.1b( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.1b( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.7( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.7( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.b( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.b( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.13( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.13( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.3( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:54 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 73 pg[9.3( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.983953556Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.984867763Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=1.03951ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.987453327Z level=info msg="Executing migration" id="Add epoch_end column"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.990641847Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.142459ms
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.992376136Z level=info msg="Executing migration" id="Add index for epoch_end"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.993144789Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=768.173µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.995157256Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.99532367Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=166.504µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.997076391Z level=info msg="Executing migration" id="Move region to single row"
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.997498443Z level=info msg="Migration successfully executed" id="Move region to single row" duration=420.732µs
Jan 26 09:43:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:54.999459838Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.000295102Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=835.204µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.00197353Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.002735392Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=761.842µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.00479227Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.005596414Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=803.914µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.007278591Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.008037253Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=755.052µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.009954628Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.010793801Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=839.563µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.012651965Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.013506808Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=854.053µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.015332821Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.015415423Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=83.172µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.018028318Z level=info msg="Executing migration" id="create test_data table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.018939253Z level=info msg="Migration successfully executed" id="create test_data table" duration=910.985µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.022456494Z level=info msg="Executing migration" id="create dashboard_version table v1"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.02339022Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=934.716µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.025655375Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.026469208Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=813.323µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.028616989Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.029427403Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=810.103µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.031912524Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.032101299Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=187.266µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.033869139Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.034200028Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=322.589µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.035895886Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.035950788Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=54.852µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.037788381Z level=info msg="Executing migration" id="create team table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.039084558Z level=info msg="Migration successfully executed" id="create team table" duration=1.295407ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.041923369Z level=info msg="Executing migration" id="add index team.org_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.042860775Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=937.616µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.045295674Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.046044156Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=747.942µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.048065223Z level=info msg="Executing migration" id="Add column uid in team"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.051187892Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.119029ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.052772527Z level=info msg="Executing migration" id="Update uid column values in team"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.052921562Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=149.185µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.054488307Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.055289759Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=799.602µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.057441351Z level=info msg="Executing migration" id="create team member table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.058100949Z level=info msg="Migration successfully executed" id="create team member table" duration=659.528µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.060210029Z level=info msg="Executing migration" id="add index team_member.org_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.06095391Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=744.311µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.063001689Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.06376784Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=765.861µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.065906152Z level=info msg="Executing migration" id="add index team_member.team_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.066620062Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=713.829µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.068781664Z level=info msg="Executing migration" id="Add column email to team table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.07427284Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.490767ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.076444051Z level=info msg="Executing migration" id="Add column external to team_member table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.081437584Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.991003ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.084058619Z level=info msg="Executing migration" id="Add column permission to team_member table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.088776753Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.720864ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.090857793Z level=info msg="Executing migration" id="create dashboard acl table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.091955203Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.09625ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.094482665Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.095491174Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.008329ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.097529302Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.098614993Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.085571ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.101173526Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.102304809Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.131113ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.104391968Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.105361236Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=969.267µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.108924888Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.110174492Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.251795ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.112752686Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.113738084Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=985.278µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.115977838Z level=info msg="Executing migration" id="add index dashboard_permission"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.116995357Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.017019ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.11919931Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.119680413Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=481.623µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.121598659Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.121838695Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=242.207µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.123568444Z level=info msg="Executing migration" id="create tag table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.124439629Z level=info msg="Migration successfully executed" id="create tag table" duration=871.155µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.126739604Z level=info msg="Executing migration" id="add index tag.key_value"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.127684082Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=944.448µs
Jan 26 09:43:55 compute-0 systemd[1]: Reloading.
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.129999198Z level=info msg="Executing migration" id="create login attempt table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.13079233Z level=info msg="Migration successfully executed" id="create login attempt table" duration=795.972µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.133004463Z level=info msg="Executing migration" id="add index login_attempt.username"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.13396358Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=958.287µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.13710069Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.138205221Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.104301ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.140170017Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.154725752Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.554915ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.156722929Z level=info msg="Executing migration" id="create login_attempt v2"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.157732627Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.011138ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.1602933Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.161524866Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.229995ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.164107479Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.164472369Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=365.21µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.166429246Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.167186217Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=756.711µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.169316258Z level=info msg="Executing migration" id="create user auth table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.17009733Z level=info msg="Migration successfully executed" id="create user auth table" duration=780.522µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.17218675Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.172994452Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=810.302µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.175272307Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.175333349Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=61.682µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.177762219Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.181594067Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.830979ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.184506791Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.189020909Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.514008ms
Jan 26 09:43:55 compute-0 systemd-rc-local-generator[100382]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.194396703Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.198332734Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.937282ms
Jan 26 09:43:55 compute-0 systemd-sysv-generator[100385]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.2027391Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.207911797Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.165706ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.210757748Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.211737486Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=980.338µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.214304029Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.218059287Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.754928ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.220769413Z level=info msg="Executing migration" id="create server_lock table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.221507995Z level=info msg="Migration successfully executed" id="create server_lock table" duration=738.601µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.224928452Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.225713194Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=787.303µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.228219636Z level=info msg="Executing migration" id="create user auth token table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.228955936Z level=info msg="Migration successfully executed" id="create user auth token table" duration=737.26µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.231552161Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.232678993Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.127113ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.23538919Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.236522113Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.133453ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.240136945Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.241092463Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=955.657µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.24346242Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.247765793Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=4.300993ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.24980101Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.250674846Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=873.656µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.253059044Z level=info msg="Executing migration" id="create cache_data table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.253829345Z level=info msg="Migration successfully executed" id="create cache_data table" duration=769.681µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.255992077Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.25680316Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=810.883µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.259183488Z level=info msg="Executing migration" id="create short_url table v1"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.260056283Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=872.605µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.263223023Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.263993665Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=770.081µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.266379623Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.266431544Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=48.281µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.268158573Z level=info msg="Executing migration" id="delete alert_definition table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.268236295Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=74.642µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.270099909Z level=info msg="Executing migration" id="recreate alert_definition table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.27084433Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=744.021µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.272964401Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.273763813Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=798.602µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.276315496Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.277174441Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=859.095µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.27962012Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.279669342Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=49.952µs
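[annotation] Note the ~50µs durations on the "to mediumtext in mysql" steps, versus roughly a millisecond for real index work: those migrations are gated on the database dialect and return immediately on anything but MySQL, and this container almost certainly runs Grafana on its default SQLite database. A hypothetical sketch of such a guard, with illustrative names that are not Grafana's API:

package main

import "fmt"

// Dialect names the SQL backend; the values are illustrative.
type Dialect string

const (
	MySQL  Dialect = "mysql"
	SQLite Dialect = "sqlite3"
)

// alterToMediumtext returns the ALTER statement on MySQL and reports
// false elsewhere: SQLite TEXT columns are already unbounded, so there
// is nothing to do and the step completes in microseconds.
func alterToMediumtext(d Dialect, table, column string) (string, bool) {
	if d != MySQL {
		return "", false
	}
	return fmt.Sprintf("ALTER TABLE %s MODIFY %s MEDIUMTEXT;", table, column), true
}

func main() {
	if stmt, ok := alterToMediumtext(SQLite, "alert_definition", "data"); ok {
		fmt.Println(stmt)
	} else {
		fmt.Println("skipped: non-MySQL dialect")
	}
}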
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.28135051Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.282375368Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.025968ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.283987145Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.284821028Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=833.683µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.287053172Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.287952637Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=896.475µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.289938524Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.290798448Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=859.614µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.292616241Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.29819473Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.569368ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.301720599Z level=info msg="Executing migration" id="drop alert_definition table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.302994476Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.275847ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.305032584Z level=info msg="Executing migration" id="delete alert_definition_version table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.305137657Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=105.883µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.30699672Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.307934637Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=938.047µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.309841841Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.310786428Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=944.537µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.312441965Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.313211487Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=769.752µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.315096731Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.315147862Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=51.161µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.317323234Z level=info msg="Executing migration" id="drop alert_definition_version table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.318343013Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.019519ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.320420993Z level=info msg="Executing migration" id="create alert_instance table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.321253106Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=831.693µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.323372527Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.32420729Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=834.783µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.326104905Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.326878976Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=773.781µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.329386778Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.333828054Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.440746ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.336027527Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.337131609Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.102502ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.33965555Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.341186994Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.531863ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.343336735Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.366989699Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=23.647874ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.369697407Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.392012092Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=22.305086ms
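[annotation] The two column renames just above cost 23.6ms and 22.3ms, an order of magnitude more than the neighboring index migrations. That is the signature of a table rebuild: one common way to rename or retype columns on SQLite (the likely backend here) is to create a replacement table, copy every row, drop the original, and rename the replacement into place. A sketch of that sequence under those assumptions, using the modernc.org/sqlite driver and illustrative SQL, not the migrator's literal statements:

package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite" // pure-Go SQLite driver; an assumption, any driver works
)

func main() {
	db, err := sql.Open("sqlite", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	steps := []string{
		// starting point: the pre-migration schema plus one row
		`CREATE TABLE alert_instance (def_org_id INTEGER, def_uid TEXT, current_state TEXT)`,
		`INSERT INTO alert_instance VALUES (1, 'rule-1', 'Normal')`,
		// the rebuild: new table with renamed columns, copy, drop, swap
		`CREATE TABLE alert_instance_v2 (rule_org_id INTEGER, rule_uid TEXT, current_state TEXT)`,
		`INSERT INTO alert_instance_v2
		     SELECT def_org_id, def_uid, current_state FROM alert_instance`,
		`DROP TABLE alert_instance`,
		`ALTER TABLE alert_instance_v2 RENAME TO alert_instance`,
	}
	for _, s := range steps {
		if _, err := db.Exec(s); err != nil {
			log.Fatal(err)
		}
	}
	log.Println("rename via rebuild complete")
}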
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.394151513Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.395493151Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.342778ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.397691974Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.398900129Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.210445ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.401553364Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.406021941Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.467447ms
Jan 26 09:43:55 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.ovxbdp for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.408027978Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.412341811Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.305243ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.41438379Z level=info msg="Executing migration" id="create alert_rule table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.415599294Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.216124ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.418487597Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.419697321Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.211924ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.423404226Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.424465257Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.06087ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.426813784Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.427896304Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.08268ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.430342814Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.430420396Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=79.992µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.432408113Z level=info msg="Executing migration" id="add column for to alert_rule"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.437636172Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=5.224708ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.439841105Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.444704183Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.860588ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.447125352Z level=info msg="Executing migration" id="add column labels to alert_rule"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.452571097Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.446535ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.454694458Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.455665105Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=972.237µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.457704844Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.458754313Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.052739ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.460934756Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.46565953Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.720984ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.467981026Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.472895697Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.909871ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.475024467Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.476099358Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.076341ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.47899282Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.4835626Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.56695ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.48564672Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.489954633Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.305833ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.491664711Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.491730883Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=66.912µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.493362489Z level=info msg="Executing migration" id="create alert_rule_version table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.494663747Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.300857ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.49688556Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.497829666Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=943.456µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.500456581Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.501476911Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.0207ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.503617691Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.503674784Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=56.303µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.505500396Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.511069124Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=5.564219ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.513219315Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.518188907Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.968132ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.520120952Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.524893168Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.770766ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.526813703Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.531344741Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.529968ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.533157764Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.539048351Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.890197ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.541038888Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.541100899Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=62.911µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.543044325Z level=info msg="Executing migration" id=create_alert_configuration_table
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.543874018Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=829.543µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.546326629Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.552151624Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=5.824895ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.554027498Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.55408897Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=62.363µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.555812629Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.560537913Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.720204ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.562780068Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.563621971Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=846.044µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.566111552Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.570686082Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.57584ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.572591977Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.573378819Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=787.022µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.575376866Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.576101987Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=724.871µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.577976251Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.582465808Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.487447ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.583951671Z level=info msg="Executing migration" id="create provenance_type table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.584708592Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=756.661µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.587049649Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.587892593Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=842.614µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.590172108Z level=info msg="Executing migration" id="create alert_image table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.590882798Z level=info msg="Migration successfully executed" id="create alert_image table" duration=710.621µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.593815462Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.594663416Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=849.104µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.596977682Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.597029003Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=52.481µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.598731861Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.600055879Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.323598ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.602618682Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.60358961Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=970.978µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.605992499Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.606684438Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
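[annotation] The level=warn line is the one deviation from the steady info stream: the migrator found the DDL's effect already in place (the index already dropped) but no matching entry in its bookkeeping, so it skips the work and records the migration as done. Grafana keeps that bookkeeping in a migration_log table; the sketch below assumes a single migration_id column, which is a simplification of the real schema:

package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite"
)

// alreadyRecorded reports whether the bookkeeping table knows this id.
func alreadyRecorded(db *sql.DB, id string) (bool, error) {
	var n int
	err := db.QueryRow(`SELECT COUNT(*) FROM migration_log WHERE migration_id = ?`, id).Scan(&n)
	return n > 0, err
}

func main() {
	db, err := sql.Open("sqlite", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if _, err := db.Exec(`CREATE TABLE migration_log (migration_id TEXT)`); err != nil {
		log.Fatal(err)
	}

	id := "drop unique orgID index on alert_configuration if exists"
	done, err := alreadyRecorded(db, id)
	if err != nil {
		log.Fatal(err)
	}
	if !done {
		// The effect already exists on disk but is unrecorded:
		// warn, skip the DDL, and record it so the check passes next boot.
		log.Printf(`level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id=%q`, id)
		if _, err := db.Exec(`INSERT INTO migration_log (migration_id) VALUES (?)`, id); err != nil {
			log.Fatal(err)
		}
	}
}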
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.608909022Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.60955339Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=643.828µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.611212227Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.61237507Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.162783ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.61446522Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.621222082Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.755862ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.623318222Z level=info msg="Executing migration" id="create library_element table v1"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.624262269Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=943.927µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.626820351Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.627682906Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=861.995µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.630003523Z level=info msg="Executing migration" id="create library_element_connection table v1"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.630697542Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=690.369µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.632654438Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.633488512Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=830.764µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.635540311Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.636296302Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=755.451µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.638209227Z level=info msg="Executing migration" id="increase max description length to 2048"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.638233797Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=24.96µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.639950606Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.640005298Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=53.492µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.641552222Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.641817639Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=265.257µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.64360747Z level=info msg="Executing migration" id="create data_keys table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.644369942Z level=info msg="Migration successfully executed" id="create data_keys table" duration=762.992µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.646461921Z level=info msg="Executing migration" id="create secrets table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.64710117Z level=info msg="Migration successfully executed" id="create secrets table" duration=639.059µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.649198759Z level=info msg="Executing migration" id="rename data_keys name column to id"
Jan 26 09:43:55 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 26 09:43:55 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.15 scrub ok
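[annotation] Interleaved with the Grafana output, the ceph-osd daemon (PID 82841) reports placement group 11.15 completing a scrub. Scrubs always log as a starts/ok pair, which makes stuck scrubs easy to spot mechanically; below is a small stand-alone scanner (an illustration, not a Ceph tool) that pairs the two events from a journal feed on stdin:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

// Matches lines like "11.15 scrub starts" / "8.f scrub ok", with or
// without the log_channel(cluster) prefix.
var scrubRE = regexp.MustCompile(`(\d+\.[0-9a-f]+) (?:deep-)?scrub (starts|ok)`)

func main() {
	open := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := scrubRE.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		if m[2] == "starts" {
			open[m[1]] = true
		} else {
			delete(open, m[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	for pg := range open {
		fmt.Printf("pg %s: scrub started but no completion seen\n", pg)
	}
}

Feed it journal output on stdin; any PG left in the map at EOF started a scrub that never reported ok.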
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.676905999Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=27.70801ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.678723221Z level=info msg="Executing migration" id="add name column into data_keys"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.683802015Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.078664ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.685623877Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.685736511Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=113.054µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.687590393Z level=info msg="Executing migration" id="rename data_keys name column to label"
Jan 26 09:43:55 compute-0 podman[100438]: 2026-01-26 09:43:55.703340182 +0000 UTC m=+0.058742195 container create 798391baa91cd71a8b0d129af24404c8c91e9fa39db544501b8f4327097216f1 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-rgw-default-compute-0-ovxbdp)
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.713202103Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=25.60582ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.714897411Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.741155679Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=26.260318ms
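[annotation] The five data_keys steps, from "rename data_keys name column to id" down to "rename data_keys id column back to name", form a copy-and-rename choreography: when they finish, a new label column holds a copy of the values while name is preserved, with no data dropped at any point. Replayed as a compact sequence against an in-memory SQLite database (illustrative SQL using RENAME COLUMN for brevity; the real migrator generates its own statements):

package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite"
)

func main() {
	db, err := sql.Open("sqlite", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	steps := []string{
		`CREATE TABLE data_keys (name TEXT PRIMARY KEY)`,    // pre-migration schema
		`INSERT INTO data_keys VALUES ('root-key')`,
		`ALTER TABLE data_keys RENAME COLUMN name TO id`,    // rename name -> id
		`ALTER TABLE data_keys ADD COLUMN name TEXT`,        // add a fresh name column
		`UPDATE data_keys SET name = id`,                    // copy id values into name
		`ALTER TABLE data_keys RENAME COLUMN name TO label`, // the copy becomes label
		`ALTER TABLE data_keys RENAME COLUMN id TO name`,    // and name is restored
	}
	for _, s := range steps {
		if _, err := db.Exec(s); err != nil {
			log.Fatal(err)
		}
	}
	var name, label string
	if err := db.QueryRow(`SELECT name, label FROM data_keys`).Scan(&name, &label); err != nil {
		log.Fatal(err)
	}
	log.Printf("name=%s label=%s", name, label) // both hold the original value
}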
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.743729963Z level=info msg="Executing migration" id="create kv_store table v1"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.744450553Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=720.79µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.747051287Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.747873871Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=821.104µs
Jan 26 09:43:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ab941a35f9336e1145bceffaa750ea5898d366d89582ce981d99063c9fc331/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.749935429Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.750105615Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=170.276µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.752085251Z level=info msg="Executing migration" id="create permission table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.752829552Z level=info msg="Migration successfully executed" id="create permission table" duration=742.46µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.754965413Z level=info msg="Executing migration" id="add unique index permission.role_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.755769546Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=804.803µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.758935806Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.759756979Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=821.453µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.761960962Z level=info msg="Executing migration" id="create role table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.762689774Z level=info msg="Migration successfully executed" id="create role table" duration=728.882µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.764994809Z level=info msg="Executing migration" id="add column display_name"
Jan 26 09:43:55 compute-0 podman[100438]: 2026-01-26 09:43:55.766021718 +0000 UTC m=+0.121423761 container init 798391baa91cd71a8b0d129af24404c8c91e9fa39db544501b8f4327097216f1 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-rgw-default-compute-0-ovxbdp)
Jan 26 09:43:55 compute-0 podman[100438]: 2026-01-26 09:43:55.678338079 +0000 UTC m=+0.033740142 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.77029332Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.298051ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.77207332Z level=info msg="Executing migration" id="add column group_name"
Jan 26 09:43:55 compute-0 podman[100438]: 2026-01-26 09:43:55.772505193 +0000 UTC m=+0.127907206 container start 798391baa91cd71a8b0d129af24404c8c91e9fa39db544501b8f4327097216f1 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-rgw-default-compute-0-ovxbdp)
Jan 26 09:43:55 compute-0 bash[100438]: 798391baa91cd71a8b0d129af24404c8c91e9fa39db544501b8f4327097216f1
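[annotation] Around the migrations, systemd is bringing up the RGW ingress haproxy: podman logs the container lifecycle (image pull, create, init, start), and the bare hash printed by bash[100438] is the container ID that podman run -d writes to stdout inside the cephadm-generated unit. A reduced sketch of that launch, with the name and image taken from the log and everything the real unit wires in (config mounts, network options) omitted as assumptions:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// -d detaches and makes podman print the new container's full ID.
	out, err := exec.Command("podman", "run", "-d",
		"--name", "ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-rgw-default-compute-0-ovxbdp",
		"quay.io/ceph/haproxy:2.3",
	).Output()
	if err != nil {
		log.Fatalf("podman run: %v", err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // e.g. 798391baa91c...
}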
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.777147885Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.074235ms
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.778783812Z level=info msg="Executing migration" id="add index role.org_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.779544764Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=760.552µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.781764466Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.7825895Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=824.524µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.784756811Z level=info msg="Executing migration" id="add index role_org_id_uid"
Jan 26 09:43:55 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.ovxbdp for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.785577545Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=816.914µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-rgw-default-compute-0-ovxbdp[100453]: [NOTICE] 025/094355 (2) : New worker #1 (4) forked
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.787908282Z level=info msg="Executing migration" id="create team role table"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.788574931Z level=info msg="Migration successfully executed" id="create team role table" duration=668.318µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.792489152Z level=info msg="Executing migration" id="add index team_role.org_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.793373227Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=882.825µs
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.797735502Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Jan 26 09:43:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.798624557Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=888.465µs
Jan 26 09:43:55 compute-0 sudo[100148]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.921314093Z level=info msg="Executing migration" id="add index team_role.team_id"
Jan 26 09:43:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:55.923929757Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=2.618555ms
Jan 26 09:43:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:56 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f00016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.053835988Z level=info msg="Executing migration" id="create user role table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.055422525Z level=info msg="Migration successfully executed" id="create user role table" duration=1.591136ms
Jan 26 09:43:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:56 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:56 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
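[annotation] These ganesha TIRPC events line up with haproxy starting a few hundred milliseconds earlier: svc_vc_recv is failing to parse a PROXY-protocol header on freshly accepted connections, which is what bare TCP health probes (no PROXY preamble, immediate close) look like to a backend expecting one. That reading is an inference from the message text, not something the log states. For reference, a well-formed PROXY v1 preamble is a single CRLF-terminated line sent ahead of the payload; a sketch with assumed addresses and ports:

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Address and port are assumptions for illustration.
	conn, err := net.Dial("tcp", "192.168.122.100:2049")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// PROXY v1: "PROXY" TCP4|TCP6 <client ip> <proxy ip> <client port> <proxy port> CRLF,
	// sent exactly once, before any payload bytes.
	fmt.Fprintf(conn, "PROXY TCP4 %s %s %d %d\r\n",
		"192.168.122.50", "192.168.122.100", 34567, 2049)
	// The relayed RPC stream would follow here.
}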
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.26344369Z level=info msg="Executing migration" id="add index user_role.org_id"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.264655255Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.215935ms
Jan 26 09:43:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.322349048Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.324748367Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=2.402899ms
Jan 26 09:43:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 26 09:43:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.327966918Z level=info msg="Executing migration" id="add index user_role.user_id"
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 10.0 scrub starts
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 10.0 scrub ok
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 8.f scrub starts
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 8.f scrub ok
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 8.7 scrub starts
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 8.7 scrub ok
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 10.e scrub starts
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 10.e scrub ok
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 12.7 scrub starts
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 12.7 scrub ok
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 11.9 scrub starts
Jan 26 09:43:56 compute-0 ceph-mon[74456]: pgmap v58: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 116 B/s, 0 keys/s, 2 objects/s recovering
Jan 26 09:43:56 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 11.9 scrub ok
Jan 26 09:43:56 compute-0 ceph-mon[74456]: osdmap e73: 3 total, 3 up, 3 in
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 10.c scrub starts
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 10.c scrub ok
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 11.15 scrub starts
Jan 26 09:43:56 compute-0 ceph-mon[74456]: 11.15 scrub ok
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.330426789Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=2.44678ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.334202837Z level=info msg="Executing migration" id="create builtin role table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.336392628Z level=info msg="Migration successfully executed" id="create builtin role table" duration=2.202522ms
Jan 26 09:43:56 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.342013039Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.344208812Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=2.198293ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.348168475Z level=info msg="Executing migration" id="add index builtin_role.name"
Jan 26 09:43:56 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 74 pg[9.f( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.350533711Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=2.366287ms
Jan 26 09:43:56 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 74 pg[9.1b( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:56 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 74 pg[9.13( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:56 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 74 pg[9.3( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:56 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 74 pg[9.b( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:56 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 74 pg[9.7( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:56 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 74 pg[9.17( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:56 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 74 pg[9.1f( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:43:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.557015479s ======
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.35500899Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Jan 26 09:43:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:43:55.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.557015479s
Jan 26 09:43:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.371926391Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=16.906402ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.374638899Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Jan 26 09:43:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.37645062Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.806601ms
Jan 26 09:43:56 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.yyinob on compute-2
Jan 26 09:43:56 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.yyinob on compute-2
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.37924905Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.38065035Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.423551ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.38345261Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.384544711Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.092112ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.386815435Z level=info msg="Executing migration" id="add unique index role.uid"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.38802948Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.221355ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.39013058Z level=info msg="Executing migration" id="create seed assignment table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.391088937Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=960.287µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.393508786Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.394880035Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.372249ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.397601693Z level=info msg="Executing migration" id="add column hidden to role table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.404376476Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=6.755942ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.406485205Z level=info msg="Executing migration" id="permission kind migration"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.412820356Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.326011ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.414940086Z level=info msg="Executing migration" id="permission attribute migration"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.421220056Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.272569ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.423994135Z level=info msg="Executing migration" id="permission identifier migration"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.43053383Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=6.533625ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.432686462Z level=info msg="Executing migration" id="add permission identifier index"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.433825605Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.141083ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.436117209Z level=info msg="Executing migration" id="add permission action scope role_id index"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.437090547Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=973.358µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.440151344Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.441300567Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.151383ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.444340104Z level=info msg="Executing migration" id="create query_history table v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.445561579Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.221045ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.447830944Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.448978437Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.147853ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.451584431Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.451638183Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=57.381µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.453528516Z level=info msg="Executing migration" id="rbac disabled migrator"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.453576327Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=48.401µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.455186833Z level=info msg="Executing migration" id="teams permissions migration"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.455543713Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=356.62µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.457458808Z level=info msg="Executing migration" id="dashboard permissions"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.458146807Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=688.049µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.46034209Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.460972588Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=631.338µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.46348086Z level=info msg="Executing migration" id="drop managed folder create actions"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.463676675Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=195.955µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.465470266Z level=info msg="Executing migration" id="alerting notification permissions"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.465989131Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=515.665µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.46769673Z level=info msg="Executing migration" id="create query_history_star table v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.468443921Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=746.971µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.470637863Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.471686043Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.04828ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.474095132Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.481366249Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=7.261907ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.48349931Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.483568392Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=71.322µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.485399054Z level=info msg="Executing migration" id="create correlation table v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.486778564Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.37812ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.488925154Z level=info msg="Executing migration" id="add index correlations.uid"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.48982116Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=896.306µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.492055863Z level=info msg="Executing migration" id="add index correlations.source_uid"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.493050842Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=994.629µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.496386228Z level=info msg="Executing migration" id="add correlation config column"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.504823158Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.433111ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.507284607Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.508377229Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.092101ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.509997035Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.510848599Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=851.614µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.51263409Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.530594582Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=17.8968ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.532765544Z level=info msg="Executing migration" id="create correlation v2"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.533725851Z level=info msg="Migration successfully executed" id="create correlation v2" duration=959.867µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.535484941Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.536332695Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=847.914µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.538697613Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.539693501Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=995.457µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.541945075Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.54279949Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=854.005µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.544709064Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.544903529Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=194.655µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.546594207Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.547348199Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=753.762µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.54879499Z level=info msg="Executing migration" id="add provisioning column"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.554859703Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.066113ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.556416927Z level=info msg="Executing migration" id="create entity_events table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.557086077Z level=info msg="Migration successfully executed" id="create entity_events table" duration=668.63µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.558594319Z level=info msg="Executing migration" id="create dashboard public config v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.559393513Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=799.194µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.561330247Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.561650566Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.563379145Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.563691775Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.565391053Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.566057292Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=666.349µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.567555714Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.568373678Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=818.034µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.570465687Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.571388084Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=920.457µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.573428603Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.574331708Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=902.855µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.576427828Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.577345653Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=916.355µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.578992721Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.579889666Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=893.865µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.581551673Z level=info msg="Executing migration" id="Drop public config table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.582365917Z level=info msg="Migration successfully executed" id="Drop public config table" duration=813.914µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.58424317Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.585346652Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.124733ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.587098231Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.588410039Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.311358ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.590057905Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.591290511Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.232376ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.593020911Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.593968277Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=947.236µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.596399547Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.617889909Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=21.482331ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.620446082Z level=info msg="Executing migration" id="add annotations_enabled column"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.627336268Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.885137ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.629518291Z level=info msg="Executing migration" id="add time_selection_enabled column"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.635699866Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.180695ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.637437456Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.637616461Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=181.566µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.639076102Z level=info msg="Executing migration" id="add share column"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.645348541Z level=info msg="Migration successfully executed" id="add share column" duration=6.268649ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.647276666Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.647461611Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=184.905µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.650885139Z level=info msg="Executing migration" id="create file table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.651798505Z level=info msg="Migration successfully executed" id="create file table" duration=914.676µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.65443503Z level=info msg="Executing migration" id="file table idx: path natural pk"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.655320165Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=885.195µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.65726518Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.658394303Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.129683ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.662280214Z level=info msg="Executing migration" id="create file_meta table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.663221551Z level=info msg="Migration successfully executed" id="create file_meta table" duration=944.608µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.665508755Z level=info msg="Executing migration" id="file table idx: path key"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.666456493Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=945.357µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.669550231Z level=info msg="Executing migration" id="set path collation in file table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.669679214Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=135.004µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.671794615Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.671846036Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=51.501µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.67339287Z level=info msg="Executing migration" id="managed permissions migration"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.673914485Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=521.935µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.675399047Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.675585593Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=186.646µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.677346623Z level=info msg="Executing migration" id="RBAC action name migrator"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.678684391Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.340648ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.680362449Z level=info msg="Executing migration" id="Add UID column to playlist"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.68706311Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=6.698571ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.688967764Z level=info msg="Executing migration" id="Update uid column values in playlist"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.689092277Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=124.783µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.690843877Z level=info msg="Executing migration" id="Add index for uid in playlist"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.691819235Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=977.028µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.695137839Z level=info msg="Executing migration" id="update group index for alert rules"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.695512261Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=374.902µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.697109846Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.697300921Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=188.375µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.698917797Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.699278627Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=360.61µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.700871814Z level=info msg="Executing migration" id="add action column to seed_assignment"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.707155732Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.284178ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.709081257Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.71514716Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.063483ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.717340152Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.718211587Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=871.795µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.720333807Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Jan 26 09:43:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v61: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 0 keys/s, 2 objects/s recovering
Jan 26 09:43:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 26 09:43:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.794099939Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=73.758442ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.796300342Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.797392084Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.092182ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.799160494Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.800037298Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=876.774µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.802360855Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.823882947Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=21.510272ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.826734269Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.833420439Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.68602ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.835116637Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.835358875Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=242.618µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.837063924Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.837206628Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=142.803µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.838950787Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.839114682Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=163.664µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.840811681Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.840976295Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=164.834µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.842578591Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.842744855Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=166.174µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.844179536Z level=info msg="Executing migration" id="create folder table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.844969518Z level=info msg="Migration successfully executed" id="create folder table" duration=790.102µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.84675063Z level=info msg="Executing migration" id="Add index for parent_uid"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.847731227Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=980.047µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.84992248Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.850781315Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=858.635µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.85274521Z level=info msg="Executing migration" id="Update folder title length"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.852765681Z level=info msg="Migration successfully executed" id="Update folder title length" duration=21.011µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.854395867Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.855303853Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=907.896µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.857342261Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.858267177Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=924.576µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.859691928Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.860732518Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.04068ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.862735805Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.863082084Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=345.869µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.864491854Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.864694901Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=202.727µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.866267436Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.86712777Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=860.294µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.868734875Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.86960426Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=869.135µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.871230637Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.87202332Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=792.463µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.873951174Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.874775588Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=824.264µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.876401594Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.877169686Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=768.462µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.878750792Z level=info msg="Executing migration" id="create anon_device table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.879456701Z level=info msg="Migration successfully executed" id="create anon_device table" duration=705.83µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.881023826Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.882036975Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.013219ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.884401932Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.885356859Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=954.757µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.887527531Z level=info msg="Executing migration" id="create signing_key table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.888480559Z level=info msg="Migration successfully executed" id="create signing_key table" duration=953.368µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.89063418Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.891484684Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=858.725µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.893329456Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.894352616Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.023ms
Jan 26 09:43:56 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 25 completed events
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.895930331Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.896214359Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=282.797µs
Jan 26 09:43:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.897928487Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.90433834Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.408133ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.905980917Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.906676937Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=696.98µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.908336284Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.909345842Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.008568ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.911533805Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Jan 26 09:43:56 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.912439551Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=905.286µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.913883452Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.914925781Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.042349ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.916780965Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.91770518Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=924.545µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.919793461Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.920869782Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.0761ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.922529409Z level=info msg="Executing migration" id="create sso_setting table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.923406194Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=875.985µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.926029038Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.926895683Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=869.115µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.9285501Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.928779086Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=229.636µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.931298358Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.93135997Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=62.212µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.933036017Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.939372678Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.333961ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.941088777Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.947564732Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.472005ms
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.949666131Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.949956959Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=286.468µs
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=migrator t=2026-01-26T09:43:56.951920986Z level=info msg="migrations completed" performed=547 skipped=0 duration=4.540392502s
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=sqlstore t=2026-01-26T09:43:56.953053168Z level=info msg="Created default organization"
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=secrets t=2026-01-26T09:43:56.954987323Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Jan 26 09:43:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=plugin.store t=2026-01-26T09:43:56.973558922Z level=info msg="Loading plugins..."
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=local.finder t=2026-01-26T09:43:57.049244628Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=plugin.store t=2026-01-26T09:43:57.049278019Z level=info msg="Plugins loaded" count=55 duration=75.719707ms
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=query_data t=2026-01-26T09:43:57.05314259Z level=info msg="Query Service initialization"
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=live.push_http t=2026-01-26T09:43:57.060388426Z level=info msg="Live Push Gateway initialization"
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=ngalert.migration t=2026-01-26T09:43:57.29780072Z level=info msg=Starting
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=ngalert.migration t=2026-01-26T09:43:57.298919722Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=ngalert.migration orgID=1 t=2026-01-26T09:43:57.299815617Z level=info msg="Migrating alerts for organisation"
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=ngalert.migration orgID=1 t=2026-01-26T09:43:57.30093903Z level=info msg="Alerts found to migrate" alerts=0
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=ngalert.migration t=2026-01-26T09:43:57.30411522Z level=info msg="Completed alerting migration"
Jan 26 09:43:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:43:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 26 09:43:57 compute-0 ceph-mon[74456]: 10.3 scrub starts
Jan 26 09:43:57 compute-0 ceph-mon[74456]: 10.3 scrub ok
Jan 26 09:43:57 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:57 compute-0 ceph-mon[74456]: 10.a scrub starts
Jan 26 09:43:57 compute-0 ceph-mon[74456]: 10.a scrub ok
Jan 26 09:43:57 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 26 09:43:57 compute-0 ceph-mon[74456]: osdmap e74: 3 total, 3 up, 3 in
Jan 26 09:43:57 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:57 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:57 compute-0 ceph-mon[74456]: Deploying daemon haproxy.rgw.default.compute-2.yyinob on compute-2
Jan 26 09:43:57 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 26 09:43:57 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=ngalert.state.manager t=2026-01-26T09:43:57.854171613Z level=info msg="Running in alternative execution of Error/NoData mode"
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=infra.usagestats.collector t=2026-01-26T09:43:57.85970215Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=provisioning.datasources t=2026-01-26T09:43:57.862561282Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=provisioning.alerting t=2026-01-26T09:43:57.966108532Z level=info msg="starting to provision alerting"
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=provisioning.alerting t=2026-01-26T09:43:57.966153093Z level=info msg="finished to provision alerting"
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=grafanaStorageLogger t=2026-01-26T09:43:57.96674482Z level=info msg="Storage starting"
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=ngalert.state.manager t=2026-01-26T09:43:57.96674446Z level=info msg="Warming state cache for startup"
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=ngalert.state.manager t=2026-01-26T09:43:57.96709354Z level=info msg="State cache has been initialized" states=0 duration=348.31µs
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=ngalert.multiorg.alertmanager t=2026-01-26T09:43:57.967168722Z level=info msg="Starting MultiOrg Alertmanager"
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=ngalert.scheduler t=2026-01-26T09:43:57.967247624Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=ticker t=2026-01-26T09:43:57.967335537Z level=info msg=starting first_tick=2026-01-26T09:44:00Z
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=http.server t=2026-01-26T09:43:57.972504384Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=http.server t=2026-01-26T09:43:57.972911006Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Jan 26 09:43:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=sqlstore.transactions t=2026-01-26T09:43:57.978383042Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 26 09:43:57 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 26 09:43:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 26 09:43:57 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.13( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=5 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.357891083s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 229.996475220s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.13( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=5 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.357825279s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.996475220s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.5( v 65'1162 (0'0,65'1162] local-lis/les=63/65 n=8 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=75 pruub=8.823134422s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 65'1161 mlcod 65'1161 active pruub 224.461929321s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.5( v 65'1162 (0'0,65'1162] local-lis/les=63/65 n=8 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=75 pruub=8.823059082s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 65'1161 mlcod 0'0 unknown NOTIFY pruub 224.461929321s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.b( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=6 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.357543945s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 229.996490479s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.b( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=6 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.357441902s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.996490479s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.7( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=6 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.357369423s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 229.996582031s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.7( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=6 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.357323647s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.996582031s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.17( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=5 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.357194901s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 229.996643066s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.17( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=5 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.357150078s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.996643066s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.f( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=6 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.354858398s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 229.994735718s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.f( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=6 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.354807854s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.994735718s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.d( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=75 pruub=8.826235771s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 224.466339111s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.d( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=75 pruub=8.826175690s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.466339111s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.3( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=6 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.356010437s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 229.996490479s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.1f( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=5 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.356190681s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 229.996688843s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.1f( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=5 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.356152534s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.996688843s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.1d( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=75 pruub=8.825913429s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 224.466415405s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.1d( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=75 pruub=8.825754166s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.466415405s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.3( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=6 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.355957031s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.996490479s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.1b( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=5 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.355288506s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 229.996429443s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.1b( v 60'1159 (0'0,60'1159] local-lis/les=73/74 n=5 ec=63/48 lis/c=73/63 les/c/f=74/65/0 sis=75 pruub=14.355252266s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.996429443s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.15( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=75 pruub=8.825240135s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 224.466781616s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:57 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 75 pg[9.15( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=75 pruub=8.825166702s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.466781616s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:58 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=plugins.update.checker t=2026-01-26T09:43:58.039968356Z level=info msg="Update check succeeded" duration=73.455473ms
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=grafana.update.checker t=2026-01-26T09:43:58.041083828Z level=info msg="Update check succeeded" duration=73.497714ms
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=provisioning.dashboard t=2026-01-26T09:43:58.042126198Z level=info msg="starting to provision dashboards"
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=sqlstore.transactions t=2026-01-26T09:43:58.065245716Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:58 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=sqlstore.transactions t=2026-01-26T09:43:58.075945491Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=sqlstore.transactions t=2026-01-26T09:43:58.089220199Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 26 09:43:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:43:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:43:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:43:58.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:43:58 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:43:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=sqlstore.transactions t=2026-01-26T09:43:58.18717461Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=sqlstore.transactions t=2026-01-26T09:43:58.217089262Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Jan 26 09:43:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=sqlstore.transactions t=2026-01-26T09:43:58.230564656Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 26 09:43:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=sqlstore.transactions t=2026-01-26T09:43:58.241266181Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Jan 26 09:43:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=sqlstore.transactions t=2026-01-26T09:43:58.33353778Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 26 09:43:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Jan 26 09:43:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:43:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:43:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:43:58.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:43:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:58 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:43:58 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:43:58 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:43:58 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:43:58 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.dhkprh on compute-0
Jan 26 09:43:58 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.dhkprh on compute-0
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=grafana-apiserver t=2026-01-26T09:43:58.420895219Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=grafana-apiserver t=2026-01-26T09:43:58.421608519Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Jan 26 09:43:58 compute-0 sudo[100474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:43:58 compute-0 sudo[100474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:43:58 compute-0 sudo[100474]: pam_unix(sudo:session): session closed for user root
Jan 26 09:43:58 compute-0 sudo[100499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:43:58 compute-0 sudo[100499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:43:58 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.a deep-scrub starts
Jan 26 09:43:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=provisioning.dashboard t=2026-01-26T09:43:58.677566991Z level=info msg="finished to provision dashboards"
Jan 26 09:43:58 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.a deep-scrub ok
Jan 26 09:43:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v63: 353 pgs: 8 active+remapped, 345 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 209 B/s, 6 objects/s recovering
Jan 26 09:43:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 26 09:43:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 26 09:43:58 compute-0 podman[100563]: 2026-01-26 09:43:58.913317709 +0000 UTC m=+0.048206845 container create c8e763acc8dcc6436c34e8dbb7c6410a280728636c01b0970e5ffb5a25cb832e (image=quay.io/ceph/keepalived:2.2.4, name=recursing_lalande, description=keepalived for Ceph, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, release=1793, version=2.2.4, com.redhat.component=keepalived-container)
Jan 26 09:43:58 compute-0 ceph-mon[74456]: pgmap v61: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 0 keys/s, 2 objects/s recovering
Jan 26 09:43:58 compute-0 ceph-mon[74456]: 10.9 scrub starts
Jan 26 09:43:58 compute-0 ceph-mon[74456]: 10.9 scrub ok
Jan 26 09:43:58 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 26 09:43:58 compute-0 ceph-mon[74456]: osdmap e75: 3 total, 3 up, 3 in
Jan 26 09:43:58 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:58 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:58 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:58 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:43:58 compute-0 ceph-mon[74456]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:43:58 compute-0 ceph-mon[74456]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:43:58 compute-0 ceph-mon[74456]: Deploying daemon keepalived.rgw.default.compute-0.dhkprh on compute-0
Jan 26 09:43:58 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 26 09:43:58 compute-0 systemd[1]: Started libpod-conmon-c8e763acc8dcc6436c34e8dbb7c6410a280728636c01b0970e5ffb5a25cb832e.scope.
Jan 26 09:43:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:43:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 26 09:43:58 compute-0 podman[100563]: 2026-01-26 09:43:58.886762732 +0000 UTC m=+0.021651888 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 26 09:43:59 compute-0 podman[100563]: 2026-01-26 09:43:59.052664068 +0000 UTC m=+0.187553234 container init c8e763acc8dcc6436c34e8dbb7c6410a280728636c01b0970e5ffb5a25cb832e (image=quay.io/ceph/keepalived:2.2.4, name=recursing_lalande, architecture=x86_64, vendor=Red Hat, Inc., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-type=git, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 26 09:43:59 compute-0 podman[100563]: 2026-01-26 09:43:59.060309417 +0000 UTC m=+0.195198553 container start c8e763acc8dcc6436c34e8dbb7c6410a280728636c01b0970e5ffb5a25cb832e (image=quay.io/ceph/keepalived:2.2.4, name=recursing_lalande, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph)
Jan 26 09:43:59 compute-0 podman[100563]: 2026-01-26 09:43:59.063481307 +0000 UTC m=+0.198370543 container attach c8e763acc8dcc6436c34e8dbb7c6410a280728636c01b0970e5ffb5a25cb832e (image=quay.io/ceph/keepalived:2.2.4, name=recursing_lalande, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, release=1793, io.buildah.version=1.28.2, description=keepalived for Ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20)
Jan 26 09:43:59 compute-0 systemd[1]: libpod-c8e763acc8dcc6436c34e8dbb7c6410a280728636c01b0970e5ffb5a25cb832e.scope: Deactivated successfully.
Jan 26 09:43:59 compute-0 recursing_lalande[100580]: 0 0
Jan 26 09:43:59 compute-0 podman[100563]: 2026-01-26 09:43:59.067767129 +0000 UTC m=+0.202656265 container died c8e763acc8dcc6436c34e8dbb7c6410a280728636c01b0970e5ffb5a25cb832e (image=quay.io/ceph/keepalived:2.2.4, name=recursing_lalande, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, architecture=x86_64, vcs-type=git, release=1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 09:43:59 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 26 09:43:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 26 09:43:59 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 26 09:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-2501e2cbc790e9c4cd4017040c6ac26e6da5210a65ad3712188b97b21b181a03-merged.mount: Deactivated successfully.
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.1d( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76) [2]/[0] r=0 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.15( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76) [2]/[0] r=0 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.15( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76) [2]/[0] r=0 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.1d( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76) [2]/[0] r=0 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.5( v 65'1162 (0'0,65'1162] local-lis/les=63/65 n=8 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76) [2]/[0] r=0 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 65'1161 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.5( v 65'1162 (0'0,65'1162] local-lis/les=63/65 n=8 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76) [2]/[0] r=0 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 65'1161 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.6( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76 pruub=15.709083557s) [1] r=-1 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 232.465591431s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.6( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76 pruub=15.709060669s) [1] r=-1 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.465591431s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.d( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76) [2]/[0] r=0 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.16( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76 pruub=15.708930016s) [1] r=-1 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 232.465576172s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.16( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76 pruub=15.708905220s) [1] r=-1 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.465576172s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.d( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76) [2]/[0] r=0 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.e( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76 pruub=15.709000587s) [1] r=-1 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 232.466003418s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.e( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76 pruub=15.708888054s) [1] r=-1 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.466003418s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.1e( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76 pruub=15.708959579s) [1] r=-1 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 232.466766357s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:43:59 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 76 pg[9.1e( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76 pruub=15.708938599s) [1] r=-1 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.466766357s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:43:59 compute-0 podman[100563]: 2026-01-26 09:43:59.128080127 +0000 UTC m=+0.262969263 container remove c8e763acc8dcc6436c34e8dbb7c6410a280728636c01b0970e5ffb5a25cb832e (image=quay.io/ceph/keepalived:2.2.4, name=recursing_lalande, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, name=keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git)
Jan 26 09:43:59 compute-0 systemd[1]: libpod-conmon-c8e763acc8dcc6436c34e8dbb7c6410a280728636c01b0970e5ffb5a25cb832e.scope: Deactivated successfully.
Jan 26 09:43:59 compute-0 systemd[1]: Reloading.
Jan 26 09:43:59 compute-0 systemd-rc-local-generator[100629]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:59 compute-0 systemd-sysv-generator[100632]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:59 compute-0 systemd[1]: Reloading.
Jan 26 09:43:59 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 26 09:43:59 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 26 09:43:59 compute-0 systemd-rc-local-generator[100670]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:43:59 compute-0 systemd-sysv-generator[100675]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:43:59 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.dhkprh for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:43:59 compute-0 ceph-mon[74456]: 12.f scrub starts
Jan 26 09:43:59 compute-0 ceph-mon[74456]: 12.f scrub ok
Jan 26 09:43:59 compute-0 ceph-mon[74456]: 9.a deep-scrub starts
Jan 26 09:43:59 compute-0 ceph-mon[74456]: 9.a deep-scrub ok
Jan 26 09:43:59 compute-0 ceph-mon[74456]: pgmap v63: 353 pgs: 8 active+remapped, 345 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 209 B/s, 6 objects/s recovering
Jan 26 09:43:59 compute-0 ceph-mon[74456]: 10.d deep-scrub starts
Jan 26 09:43:59 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 26 09:43:59 compute-0 ceph-mon[74456]: osdmap e76: 3 total, 3 up, 3 in
Jan 26 09:43:59 compute-0 ceph-mon[74456]: 9.8 scrub starts
Jan 26 09:43:59 compute-0 ceph-mon[74456]: 9.8 scrub ok
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:00 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:00 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 26 09:44:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:00.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:00 compute-0 podman[100725]: 2026-01-26 09:44:00.133498133 +0000 UTC m=+0.042620705 container create 46c10b35b2f7e59ccb45c58d1aeeac0f25f8e42a679e71807b1675e3bd0a1248 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh, build-date=2023-02-22T09:23:20, distribution-scope=public, version=2.2.4, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, com.redhat.component=keepalived-container, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, description=keepalived for Ceph, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:00 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 26 09:44:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 26 09:44:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 77 pg[9.1e( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=77) [1]/[0] r=0 lpr=77 pi=[63,77)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 77 pg[9.16( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=77) [1]/[0] r=0 lpr=77 pi=[63,77)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 77 pg[9.1e( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=77) [1]/[0] r=0 lpr=77 pi=[63,77)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:44:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 77 pg[9.16( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=77) [1]/[0] r=0 lpr=77 pi=[63,77)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:44:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 77 pg[9.e( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=77) [1]/[0] r=0 lpr=77 pi=[63,77)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 77 pg[9.e( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=77) [1]/[0] r=0 lpr=77 pi=[63,77)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:44:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 77 pg[9.6( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=77) [1]/[0] r=0 lpr=77 pi=[63,77)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 77 pg[9.6( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=77) [1]/[0] r=0 lpr=77 pi=[63,77)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1081344159d9150fef7e48d957d18db600f5137bdb38ec656a686f5b3b65fd31/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:00 compute-0 podman[100725]: 2026-01-26 09:44:00.196773245 +0000 UTC m=+0.105895837 container init 46c10b35b2f7e59ccb45c58d1aeeac0f25f8e42a679e71807b1675e3bd0a1248 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.openshift.expose-services=, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, version=2.2.4, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 26 09:44:00 compute-0 podman[100725]: 2026-01-26 09:44:00.201503891 +0000 UTC m=+0.110626463 container start 46c10b35b2f7e59ccb45c58d1aeeac0f25f8e42a679e71807b1675e3bd0a1248 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, distribution-scope=public, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, vcs-type=git, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.openshift.expose-services=)
Jan 26 09:44:00 compute-0 bash[100725]: 46c10b35b2f7e59ccb45c58d1aeeac0f25f8e42a679e71807b1675e3bd0a1248
Jan 26 09:44:00 compute-0 podman[100725]: 2026-01-26 09:44:00.115068558 +0000 UTC m=+0.024191150 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 26 09:44:00 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.dhkprh for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:44:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 77 pg[9.5( v 65'1162 (0'0,65'1162] local-lis/les=76/77 n=8 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[63,76)/1 crt=65'1162 lcod 65'1161 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 77 pg[9.15( v 60'1159 (0'0,60'1159] local-lis/les=76/77 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh[100739]: Mon Jan 26 09:44:00 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh[100739]: Mon Jan 26 09:44:00 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh[100739]: Mon Jan 26 09:44:00 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh[100739]: Mon Jan 26 09:44:00 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh[100739]: Mon Jan 26 09:44:00 2026: Failed to bind to process monitoring socket - errno 98 - Address already in use
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh[100739]: Mon Jan 26 09:44:00 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh[100739]: Mon Jan 26 09:44:00 2026: Starting VRRP child process, pid=4
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh[100739]: Mon Jan 26 09:44:00 2026: Startup complete
Jan 26 09:44:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 77 pg[9.1d( v 60'1159 (0'0,60'1159] local-lis/les=76/77 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:00 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 77 pg[9.d( v 60'1159 (0'0,60'1159] local-lis/les=76/77 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj[98870]: Mon Jan 26 09:44:00 2026: (VI_0) Entering BACKUP STATE
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh[100739]: Mon Jan 26 09:44:00 2026: (VI_0) Entering BACKUP STATE (init)
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh[100739]: Mon Jan 26 09:44:00 2026: VRRP_Script(check_backend) succeeded
Jan 26 09:44:00 compute-0 sudo[100499]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:44:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:00.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:44:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 26 09:44:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:00 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:44:00 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:44:00 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:44:00 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:44:00 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.djgvpg on compute-2
Jan 26 09:44:00 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.djgvpg on compute-2
Jan 26 09:44:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v66: 353 pgs: 8 active+remapped, 345 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 253 B/s, 7 objects/s recovering
Jan 26 09:44:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 26 09:44:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 26 09:44:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj[98870]: Mon Jan 26 09:44:00 2026: (VI_0) Entering MASTER STATE
Jan 26 09:44:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 26 09:44:01 compute-0 ceph-mon[74456]: 10.d deep-scrub ok
Jan 26 09:44:01 compute-0 ceph-mon[74456]: 9.13 scrub starts
Jan 26 09:44:01 compute-0 ceph-mon[74456]: 9.13 scrub ok
Jan 26 09:44:01 compute-0 ceph-mon[74456]: 10.b scrub starts
Jan 26 09:44:01 compute-0 ceph-mon[74456]: 10.b scrub ok
Jan 26 09:44:01 compute-0 ceph-mon[74456]: osdmap e77: 3 total, 3 up, 3 in
Jan 26 09:44:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:01 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 26 09:44:01 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 26 09:44:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 26 09:44:01 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 26 09:44:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 78 pg[9.5( v 77'1166 (0'0,77'1166] local-lis/les=76/77 n=8 ec=63/48 lis/c=76/63 les/c/f=77/65/0 sis=78 pruub=14.700515747s) [2] async=[2] r=-1 lpr=78 pi=[63,78)/1 crt=65'1162 lcod 77'1165 mlcod 77'1165 active pruub 233.872024536s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 78 pg[9.5( v 77'1166 (0'0,77'1166] local-lis/les=76/77 n=8 ec=63/48 lis/c=76/63 les/c/f=77/65/0 sis=78 pruub=14.700400352s) [2] r=-1 lpr=78 pi=[63,78)/1 crt=65'1162 lcod 77'1165 mlcod 0'0 unknown NOTIFY pruub 233.872024536s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 78 pg[9.d( v 60'1159 (0'0,60'1159] local-lis/les=76/77 n=6 ec=63/48 lis/c=76/63 les/c/f=77/65/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[63,76)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Jan 26 09:44:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 78 pg[9.1d( v 60'1159 (0'0,60'1159] local-lis/les=76/77 n=5 ec=63/48 lis/c=76/63 les/c/f=77/65/0 sis=78 pruub=14.708618164s) [2] async=[2] r=-1 lpr=78 pi=[63,78)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 233.881103516s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 78 pg[9.1d( v 60'1159 (0'0,60'1159] local-lis/les=76/77 n=5 ec=63/48 lis/c=76/63 les/c/f=77/65/0 sis=78 pruub=14.708536148s) [2] r=-1 lpr=78 pi=[63,78)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.881103516s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 78 pg[9.15( v 60'1159 (0'0,60'1159] local-lis/les=76/77 n=5 ec=63/48 lis/c=76/63 les/c/f=77/65/0 sis=78 pruub=14.698900223s) [2] async=[2] r=-1 lpr=78 pi=[63,78)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 233.872070312s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 78 pg[9.15( v 60'1159 (0'0,60'1159] local-lis/les=76/77 n=5 ec=63/48 lis/c=76/63 les/c/f=77/65/0 sis=78 pruub=14.698789597s) [2] r=-1 lpr=78 pi=[63,78)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.872070312s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:01 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 26 09:44:01 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 26 09:44:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 78 pg[9.1e( v 60'1159 (0'0,60'1159] local-lis/les=77/78 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=77) [1]/[0] async=[1] r=0 lpr=77 pi=[63,77)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 78 pg[9.6( v 60'1159 (0'0,60'1159] local-lis/les=77/78 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=77) [1]/[0] async=[1] r=0 lpr=77 pi=[63,77)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 78 pg[9.16( v 60'1159 (0'0,60'1159] local-lis/les=77/78 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=77) [1]/[0] async=[1] r=0 lpr=77 pi=[63,77)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:01 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 78 pg[9.e( v 60'1159 (0'0,60'1159] local-lis/les=77/78 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=77) [1]/[0] async=[1] r=0 lpr=77 pi=[63,77)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:01 compute-0 ceph-mgr[74755]: [progress WARNING root] Starting Global Recovery Event,8 pgs not in active + clean state
Jan 26 09:44:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:02 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:02 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:02.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:02 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:02.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 26 09:44:02 compute-0 ceph-mon[74456]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 09:44:02 compute-0 ceph-mon[74456]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 09:44:02 compute-0 ceph-mon[74456]: Deploying daemon keepalived.rgw.default.compute-2.djgvpg on compute-2
Jan 26 09:44:02 compute-0 ceph-mon[74456]: pgmap v66: 353 pgs: 8 active+remapped, 345 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 253 B/s, 7 objects/s recovering
Jan 26 09:44:02 compute-0 ceph-mon[74456]: 12.d deep-scrub starts
Jan 26 09:44:02 compute-0 ceph-mon[74456]: 12.d deep-scrub ok
Jan 26 09:44:02 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 26 09:44:02 compute-0 ceph-mon[74456]: osdmap e78: 3 total, 3 up, 3 in
Jan 26 09:44:02 compute-0 ceph-mon[74456]: 9.1f scrub starts
Jan 26 09:44:02 compute-0 ceph-mon[74456]: 9.4 scrub starts
Jan 26 09:44:02 compute-0 ceph-mon[74456]: 9.1f scrub ok
Jan 26 09:44:02 compute-0 ceph-mon[74456]: 9.4 scrub ok
Jan 26 09:44:02 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 26 09:44:02 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 26 09:44:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 26 09:44:02 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 26 09:44:02 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 79 pg[9.6( v 60'1159 (0'0,60'1159] local-lis/les=77/78 n=6 ec=63/48 lis/c=77/63 les/c/f=78/65/0 sis=79 pruub=15.118217468s) [1] async=[1] r=-1 lpr=79 pi=[63,79)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 235.428085327s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:02 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 79 pg[9.16( v 60'1159 (0'0,60'1159] local-lis/les=77/78 n=5 ec=63/48 lis/c=77/63 les/c/f=78/65/0 sis=79 pruub=15.118059158s) [1] async=[1] r=-1 lpr=79 pi=[63,79)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 235.428115845s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:02 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 79 pg[9.6( v 60'1159 (0'0,60'1159] local-lis/les=77/78 n=6 ec=63/48 lis/c=77/63 les/c/f=78/65/0 sis=79 pruub=15.117990494s) [1] r=-1 lpr=79 pi=[63,79)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.428085327s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:02 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 79 pg[9.e( v 60'1159 (0'0,60'1159] local-lis/les=77/78 n=6 ec=63/48 lis/c=77/63 les/c/f=78/65/0 sis=79 pruub=15.117538452s) [1] async=[1] r=-1 lpr=79 pi=[63,79)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 235.428131104s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:02 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 79 pg[9.e( v 60'1159 (0'0,60'1159] local-lis/les=77/78 n=6 ec=63/48 lis/c=77/63 les/c/f=78/65/0 sis=79 pruub=15.117502213s) [1] r=-1 lpr=79 pi=[63,79)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.428131104s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:02 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 79 pg[9.16( v 60'1159 (0'0,60'1159] local-lis/les=77/78 n=5 ec=63/48 lis/c=77/63 les/c/f=78/65/0 sis=79 pruub=15.117522240s) [1] r=-1 lpr=79 pi=[63,79)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.428115845s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:02 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 79 pg[9.d( v 60'1159 (0'0,60'1159] local-lis/les=76/77 n=6 ec=63/48 lis/c=76/63 les/c/f=77/65/0 sis=79 pruub=13.570216179s) [2] async=[2] r=-1 lpr=79 pi=[63,79)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 233.881134033s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:02 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 79 pg[9.d( v 60'1159 (0'0,60'1159] local-lis/les=76/77 n=6 ec=63/48 lis/c=76/63 les/c/f=77/65/0 sis=79 pruub=13.570155144s) [2] r=-1 lpr=79 pi=[63,79)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.881134033s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:02 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 79 pg[9.1e( v 60'1159 (0'0,60'1159] local-lis/les=77/78 n=5 ec=63/48 lis/c=77/63 les/c/f=78/65/0 sis=79 pruub=15.113532066s) [1] async=[1] r=-1 lpr=79 pi=[63,79)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 235.425247192s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:02 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 79 pg[9.1e( v 60'1159 (0'0,60'1159] local-lis/les=77/78 n=5 ec=63/48 lis/c=77/63 les/c/f=78/65/0 sis=79 pruub=15.113487244s) [1] r=-1 lpr=79 pi=[63,79)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.425247192s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 8 active+remapped, 345 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 26 09:44:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 26 09:44:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:44:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:44:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 26 09:44:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:03 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev c33f7d9c-a41b-4a4f-8a21-a13e4926bcdf (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 26 09:44:03 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event c33f7d9c-a41b-4a4f-8a21-a13e4926bcdf (Updating ingress.rgw.default deployment (+4 -> 4)) in 11 seconds
Jan 26 09:44:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 26 09:44:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:03 compute-0 ceph-mgr[74755]: [progress INFO root] update: starting ev 85403969-836a-42ea-87d3-841507bf2765 (Updating prometheus deployment (+1 -> 1))
Jan 26 09:44:03 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Jan 26 09:44:03 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Jan 26 09:44:03 compute-0 sudo[100750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:03 compute-0 sudo[100750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:03 compute-0 sudo[100750]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:03 compute-0 sudo[100775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:44:03 compute-0 sudo[100775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:03 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 26 09:44:03 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 26 09:44:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 26 09:44:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-rgw-default-compute-0-dhkprh[100739]: Mon Jan 26 09:44:03 2026: (VI_0) Entering MASTER STATE
Jan 26 09:44:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 26 09:44:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 26 09:44:04 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 80 pg[9.8( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=7 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=80 pruub=10.800627708s) [2] r=-1 lpr=80 pi=[63,80)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 232.465713501s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:04 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 80 pg[9.8( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=7 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=80 pruub=10.800596237s) [2] r=-1 lpr=80 pi=[63,80)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.465713501s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:04 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 80 pg[9.18( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=80 pruub=10.800463676s) [2] r=-1 lpr=80 pi=[63,80)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 232.466796875s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:04 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 80 pg[9.18( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=80 pruub=10.800319672s) [2] r=-1 lpr=80 pi=[63,80)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.466796875s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:04 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 26 09:44:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:04 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:04 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:04.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:04 compute-0 ceph-mon[74456]: 11.6 scrub starts
Jan 26 09:44:04 compute-0 ceph-mon[74456]: 11.6 scrub ok
Jan 26 09:44:04 compute-0 ceph-mon[74456]: 11.a scrub starts
Jan 26 09:44:04 compute-0 ceph-mon[74456]: 11.a scrub ok
Jan 26 09:44:04 compute-0 ceph-mon[74456]: osdmap e79: 3 total, 3 up, 3 in
Jan 26 09:44:04 compute-0 ceph-mon[74456]: pgmap v69: 353 pgs: 8 active+remapped, 345 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:04 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 26 09:44:04 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:04 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:04 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:04 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:04 compute-0 ceph-mon[74456]: Deploying daemon prometheus.compute-0 on compute-0
Jan 26 09:44:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:04 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:04.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:04 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 26 09:44:04 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 26 09:44:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 5 peering, 348 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 129 B/s, 7 objects/s recovering
Jan 26 09:44:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 26 09:44:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 26 09:44:05 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 26 09:44:05 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 81 pg[9.8( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=7 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=81) [2]/[0] r=0 lpr=81 pi=[63,81)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:05 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 81 pg[9.8( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=7 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=81) [2]/[0] r=0 lpr=81 pi=[63,81)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:44:05 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 81 pg[9.18( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=81) [2]/[0] r=0 lpr=81 pi=[63,81)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:05 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 81 pg[9.18( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=81) [2]/[0] r=0 lpr=81 pi=[63,81)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:44:05 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 26 09:44:05 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 26 09:44:05 compute-0 ceph-mon[74456]: 11.b scrub starts
Jan 26 09:44:05 compute-0 ceph-mon[74456]: 11.b scrub ok
Jan 26 09:44:05 compute-0 ceph-mon[74456]: 8.c scrub starts
Jan 26 09:44:05 compute-0 ceph-mon[74456]: 8.c scrub ok
Jan 26 09:44:05 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 26 09:44:05 compute-0 ceph-mon[74456]: osdmap e80: 3 total, 3 up, 3 in
Jan 26 09:44:05 compute-0 ceph-mon[74456]: 11.c scrub starts
Jan 26 09:44:05 compute-0 ceph-mon[74456]: 11.c scrub ok
Jan 26 09:44:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:06 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:06 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:06.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:06 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:06.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 26 09:44:06 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 26 09:44:06 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 26 09:44:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 26 09:44:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v73: 353 pgs: 5 peering, 348 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 113 B/s, 6 objects/s recovering
Jan 26 09:44:06 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:44:06 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:44:06 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:44:06 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:44:06 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:44:06 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:44:06 compute-0 ceph-mgr[74755]: [progress INFO root] Writing back 26 completed events
Jan 26 09:44:07 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 26 09:44:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 26 09:44:07 compute-0 ceph-mon[74456]: 10.10 scrub starts
Jan 26 09:44:07 compute-0 ceph-mon[74456]: 10.10 scrub ok
Jan 26 09:44:07 compute-0 ceph-mon[74456]: pgmap v71: 353 pgs: 5 peering, 348 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 129 B/s, 7 objects/s recovering
Jan 26 09:44:07 compute-0 ceph-mon[74456]: 9.e scrub starts
Jan 26 09:44:07 compute-0 ceph-mon[74456]: 9.e scrub ok
Jan 26 09:44:07 compute-0 ceph-mon[74456]: osdmap e81: 3 total, 3 up, 3 in
Jan 26 09:44:07 compute-0 ceph-mon[74456]: 11.d scrub starts
Jan 26 09:44:07 compute-0 ceph-mon[74456]: 11.d scrub ok
Jan 26 09:44:07 compute-0 ceph-mon[74456]: 9.6 scrub starts
Jan 26 09:44:07 compute-0 ceph-mon[74456]: 9.6 scrub ok
Jan 26 09:44:07 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 82 pg[9.8( v 60'1159 (0'0,60'1159] local-lis/les=81/82 n=7 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=81) [2]/[0] async=[2] r=0 lpr=81 pi=[63,81)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:07 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 82 pg[9.18( v 60'1159 (0'0,60'1159] local-lis/les=81/82 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=81) [2]/[0] async=[2] r=0 lpr=81 pi=[63,81)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:44:07 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 26 09:44:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 26 09:44:07 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 26 09:44:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 26 09:44:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:08 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:08 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 26 09:44:08 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 83 pg[9.18( v 60'1159 (0'0,60'1159] local-lis/les=81/82 n=5 ec=63/48 lis/c=81/63 les/c/f=82/65/0 sis=83 pruub=14.968602180s) [2] async=[2] r=-1 lpr=83 pi=[63,83)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 240.671417236s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:08 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 83 pg[9.18( v 60'1159 (0'0,60'1159] local-lis/les=81/82 n=5 ec=63/48 lis/c=81/63 les/c/f=82/65/0 sis=83 pruub=14.968538284s) [2] r=-1 lpr=83 pi=[63,83)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 240.671417236s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:08 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 83 pg[9.8( v 60'1159 (0'0,60'1159] local-lis/les=81/82 n=7 ec=63/48 lis/c=81/63 les/c/f=82/65/0 sis=83 pruub=14.963144302s) [2] async=[2] r=-1 lpr=83 pi=[63,83)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 240.666046143s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:08 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 83 pg[9.8( v 60'1159 (0'0,60'1159] local-lis/les=81/82 n=7 ec=63/48 lis/c=81/63 les/c/f=82/65/0 sis=83 pruub=14.962957382s) [2] r=-1 lpr=83 pi=[63,83)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 240.666046143s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:08 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:08 compute-0 ceph-mon[74456]: 12.17 scrub starts
Jan 26 09:44:08 compute-0 ceph-mon[74456]: 12.17 scrub ok
Jan 26 09:44:08 compute-0 ceph-mon[74456]: 8.e scrub starts
Jan 26 09:44:08 compute-0 ceph-mon[74456]: 8.e scrub ok
Jan 26 09:44:08 compute-0 ceph-mon[74456]: 10.1e scrub starts
Jan 26 09:44:08 compute-0 ceph-mon[74456]: 10.1e scrub ok
Jan 26 09:44:08 compute-0 ceph-mon[74456]: pgmap v73: 353 pgs: 5 peering, 348 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 113 B/s, 6 objects/s recovering
Jan 26 09:44:08 compute-0 ceph-mon[74456]: 9.16 scrub starts
Jan 26 09:44:08 compute-0 ceph-mon[74456]: 9.16 scrub ok
Jan 26 09:44:08 compute-0 ceph-mon[74456]: osdmap e82: 3 total, 3 up, 3 in
Jan 26 09:44:08 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:08 compute-0 ceph-mon[74456]: 8.0 scrub starts
Jan 26 09:44:08 compute-0 ceph-mon[74456]: 8.0 scrub ok
Jan 26 09:44:08 compute-0 ceph-mon[74456]: 12.5 scrub starts
Jan 26 09:44:08 compute-0 ceph-mon[74456]: 12.5 scrub ok
Jan 26 09:44:08 compute-0 ceph-mon[74456]: osdmap e83: 3 total, 3 up, 3 in
Jan 26 09:44:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:08.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:08 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:08.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:08 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 26 09:44:08 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 26 09:44:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v76: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 26 09:44:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 26 09:44:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 26 09:44:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 26 09:44:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 26 09:44:08 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 26 09:44:08 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 84 pg[9.9( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=84 pruub=13.922858238s) [2] r=-1 lpr=84 pi=[63,84)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 240.465820312s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:08 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 84 pg[9.9( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=84 pruub=13.922821999s) [2] r=-1 lpr=84 pi=[63,84)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 240.465820312s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:08 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 84 pg[9.19( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=84 pruub=13.922943115s) [2] r=-1 lpr=84 pi=[63,84)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 240.466873169s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:08 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 84 pg[9.19( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=84 pruub=13.922859192s) [2] r=-1 lpr=84 pi=[63,84)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 240.466873169s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:09 compute-0 podman[100840]: 2026-01-26 09:44:09.034624355 +0000 UTC m=+5.127671393 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 26 09:44:09 compute-0 podman[100840]: 2026-01-26 09:44:09.052364651 +0000 UTC m=+5.145411689 volume create 3c22434f343da6092061631032dbc2d453fedff2fb4eaf1b5fd17e871b8c33ad
Jan 26 09:44:09 compute-0 podman[100840]: 2026-01-26 09:44:09.063001961 +0000 UTC m=+5.156048999 container create 288ecc94bfade6de91fbc305cf1540564acd6aafcb3ffccc036df819d6c12701 (image=quay.io/prometheus/prometheus:v2.51.0, name=practical_greider, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:09 compute-0 ceph-mon[74456]: 11.13 deep-scrub starts
Jan 26 09:44:09 compute-0 ceph-mon[74456]: 11.13 deep-scrub ok
Jan 26 09:44:09 compute-0 ceph-mon[74456]: 11.2 scrub starts
Jan 26 09:44:09 compute-0 ceph-mon[74456]: 11.2 scrub ok
Jan 26 09:44:09 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 26 09:44:09 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 26 09:44:09 compute-0 ceph-mon[74456]: osdmap e84: 3 total, 3 up, 3 in
Jan 26 09:44:09 compute-0 ceph-mon[74456]: 12.0 scrub starts
Jan 26 09:44:09 compute-0 ceph-mon[74456]: 12.0 scrub ok
Jan 26 09:44:09 compute-0 systemd[1]: Started libpod-conmon-288ecc94bfade6de91fbc305cf1540564acd6aafcb3ffccc036df819d6c12701.scope.
Jan 26 09:44:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35a1500331551b07fcb60fed76f441809dc427ebf87aa41084b0b931fefbacc9/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:09 compute-0 podman[100840]: 2026-01-26 09:44:09.144109229 +0000 UTC m=+5.237156277 container init 288ecc94bfade6de91fbc305cf1540564acd6aafcb3ffccc036df819d6c12701 (image=quay.io/prometheus/prometheus:v2.51.0, name=practical_greider, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:09 compute-0 podman[100840]: 2026-01-26 09:44:09.151303178 +0000 UTC m=+5.244350226 container start 288ecc94bfade6de91fbc305cf1540564acd6aafcb3ffccc036df819d6c12701 (image=quay.io/prometheus/prometheus:v2.51.0, name=practical_greider, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:09 compute-0 practical_greider[101099]: 65534 65534
Jan 26 09:44:09 compute-0 systemd[1]: libpod-288ecc94bfade6de91fbc305cf1540564acd6aafcb3ffccc036df819d6c12701.scope: Deactivated successfully.
Jan 26 09:44:09 compute-0 podman[100840]: 2026-01-26 09:44:09.156041236 +0000 UTC m=+5.249088314 container attach 288ecc94bfade6de91fbc305cf1540564acd6aafcb3ffccc036df819d6c12701 (image=quay.io/prometheus/prometheus:v2.51.0, name=practical_greider, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:09 compute-0 podman[100840]: 2026-01-26 09:44:09.156932152 +0000 UTC m=+5.249979200 container died 288ecc94bfade6de91fbc305cf1540564acd6aafcb3ffccc036df819d6c12701 (image=quay.io/prometheus/prometheus:v2.51.0, name=practical_greider, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-35a1500331551b07fcb60fed76f441809dc427ebf87aa41084b0b931fefbacc9-merged.mount: Deactivated successfully.
Jan 26 09:44:09 compute-0 podman[100840]: 2026-01-26 09:44:09.19815458 +0000 UTC m=+5.291201618 container remove 288ecc94bfade6de91fbc305cf1540564acd6aafcb3ffccc036df819d6c12701 (image=quay.io/prometheus/prometheus:v2.51.0, name=practical_greider, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:09 compute-0 podman[100840]: 2026-01-26 09:44:09.201701144 +0000 UTC m=+5.294748202 volume remove 3c22434f343da6092061631032dbc2d453fedff2fb4eaf1b5fd17e871b8c33ad
Jan 26 09:44:09 compute-0 systemd[1]: libpod-conmon-288ecc94bfade6de91fbc305cf1540564acd6aafcb3ffccc036df819d6c12701.scope: Deactivated successfully.
Jan 26 09:44:09 compute-0 podman[101116]: 2026-01-26 09:44:09.271794512 +0000 UTC m=+0.040618852 volume create e1b76d047e635fa83c0ea9fada1f7cba80e0424356a91cebef88d282e4a97f53
Jan 26 09:44:09 compute-0 podman[101116]: 2026-01-26 09:44:09.280288949 +0000 UTC m=+0.049113289 container create 47ed0451101efcaba406a940794d0f45167e5aa11681c0edc24da4dfc28f6032 (image=quay.io/prometheus/prometheus:v2.51.0, name=naughty_wilbur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:09 compute-0 systemd[1]: Started libpod-conmon-47ed0451101efcaba406a940794d0f45167e5aa11681c0edc24da4dfc28f6032.scope.
Jan 26 09:44:09 compute-0 podman[101116]: 2026-01-26 09:44:09.254943552 +0000 UTC m=+0.023767912 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 26 09:44:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e26bb51aeaa69b7854c7d3e6c20fdd8f0cffc733188382e13203d67f7c2fbd/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:09 compute-0 podman[101116]: 2026-01-26 09:44:09.406422276 +0000 UTC m=+0.175246636 container init 47ed0451101efcaba406a940794d0f45167e5aa11681c0edc24da4dfc28f6032 (image=quay.io/prometheus/prometheus:v2.51.0, name=naughty_wilbur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:09 compute-0 podman[101116]: 2026-01-26 09:44:09.415877971 +0000 UTC m=+0.184702311 container start 47ed0451101efcaba406a940794d0f45167e5aa11681c0edc24da4dfc28f6032 (image=quay.io/prometheus/prometheus:v2.51.0, name=naughty_wilbur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:09 compute-0 naughty_wilbur[101132]: 65534 65534
Jan 26 09:44:09 compute-0 systemd[1]: libpod-47ed0451101efcaba406a940794d0f45167e5aa11681c0edc24da4dfc28f6032.scope: Deactivated successfully.
Jan 26 09:44:09 compute-0 podman[101116]: 2026-01-26 09:44:09.420373342 +0000 UTC m=+0.189197732 container attach 47ed0451101efcaba406a940794d0f45167e5aa11681c0edc24da4dfc28f6032 (image=quay.io/prometheus/prometheus:v2.51.0, name=naughty_wilbur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:09 compute-0 podman[101116]: 2026-01-26 09:44:09.420759653 +0000 UTC m=+0.189584033 container died 47ed0451101efcaba406a940794d0f45167e5aa11681c0edc24da4dfc28f6032 (image=quay.io/prometheus/prometheus:v2.51.0, name=naughty_wilbur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-52e26bb51aeaa69b7854c7d3e6c20fdd8f0cffc733188382e13203d67f7c2fbd-merged.mount: Deactivated successfully.
Jan 26 09:44:09 compute-0 podman[101116]: 2026-01-26 09:44:09.464974329 +0000 UTC m=+0.233798669 container remove 47ed0451101efcaba406a940794d0f45167e5aa11681c0edc24da4dfc28f6032 (image=quay.io/prometheus/prometheus:v2.51.0, name=naughty_wilbur, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:09 compute-0 podman[101116]: 2026-01-26 09:44:09.4687994 +0000 UTC m=+0.237623740 volume remove e1b76d047e635fa83c0ea9fada1f7cba80e0424356a91cebef88d282e4a97f53
Jan 26 09:44:09 compute-0 systemd[1]: libpod-conmon-47ed0451101efcaba406a940794d0f45167e5aa11681c0edc24da4dfc28f6032.scope: Deactivated successfully.
Jan 26 09:44:09 compute-0 systemd[1]: Reloading.
Jan 26 09:44:09 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 26 09:44:09 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 26 09:44:09 compute-0 systemd-sysv-generator[101179]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:44:09 compute-0 systemd-rc-local-generator[101172]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:44:09 compute-0 systemd[1]: Reloading.
Jan 26 09:44:09 compute-0 systemd-rc-local-generator[101219]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:44:09 compute-0 systemd-sysv-generator[101222]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:44:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:10 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:10 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:10.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:10 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 26 09:44:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 26 09:44:10 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 85 pg[9.9( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=85) [2]/[0] r=0 lpr=85 pi=[63,85)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:10 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 85 pg[9.9( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=85) [2]/[0] r=0 lpr=85 pi=[63,85)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:44:10 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 85 pg[9.19( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=85) [2]/[0] r=0 lpr=85 pi=[63,85)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:10 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 85 pg[9.19( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=85) [2]/[0] r=0 lpr=85 pi=[63,85)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:44:10 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:44:10 compute-0 ceph-mon[74456]: pgmap v76: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:10 compute-0 ceph-mon[74456]: 8.1f scrub starts
Jan 26 09:44:10 compute-0 ceph-mon[74456]: 8.1f scrub ok
Jan 26 09:44:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:10.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:10 compute-0 podman[101278]: 2026-01-26 09:44:10.483808962 +0000 UTC m=+0.044826054 container create 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:10 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 26 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848612b2c35efd52f35abd8f0c2fdff2bc1956759181e2b51901222ff1ee3785/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848612b2c35efd52f35abd8f0c2fdff2bc1956759181e2b51901222ff1ee3785/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:10 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 26 09:44:10 compute-0 podman[101278]: 2026-01-26 09:44:10.535056133 +0000 UTC m=+0.096073225 container init 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:10 compute-0 podman[101278]: 2026-01-26 09:44:10.542652344 +0000 UTC m=+0.103669436 container start 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:10 compute-0 bash[101278]: 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488
Jan 26 09:44:10 compute-0 podman[101278]: 2026-01-26 09:44:10.463939975 +0000 UTC m=+0.024957097 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 26 09:44:10 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.581Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.582Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.582Z caller=main.go:623 level=info host_details="(Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 x86_64 compute-0 (none))"
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.582Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.582Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.585Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.586Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.588Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.588Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.591Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.591Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.41µs
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.591Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.591Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.591Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=37.062µs wal_replay_duration=276.698µs wbl_replay_duration=180ns total_replay_duration=340.79µs
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.593Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.593Z caller=main.go:1153 level=info msg="TSDB started"
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.593Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.619Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=25.338296ms db_storage=1.02µs remote_storage=1.6µs web_handler=720ns query_engine=1.72µs scrape=3.816001ms scrape_sd=1.079121ms notify=26.801µs notify_sd=217.786µs rules=19.663052ms tracing=12.88µs
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.619Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Jan 26 09:44:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0[101293]: ts=2026-01-26T09:44:10.619Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Jan 26 09:44:10 compute-0 sudo[100775]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:44:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:44:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 26 09:44:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:10 compute-0 ceph-mgr[74755]: [progress INFO root] complete: finished ev 85403969-836a-42ea-87d3-841507bf2765 (Updating prometheus deployment (+1 -> 1))
Jan 26 09:44:10 compute-0 ceph-mgr[74755]: [progress INFO root] Completed event 85403969-836a-42ea-87d3-841507bf2765 (Updating prometheus deployment (+1 -> 1)) in 8 seconds
Jan 26 09:44:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Jan 26 09:44:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Jan 26 09:44:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 26 09:44:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 26 09:44:11 compute-0 sshd-session[101097]: Connection closed by 87.236.176.170 port 52771
Jan 26 09:44:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 26 09:44:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 26 09:44:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 26 09:44:11 compute-0 sshd-session[101310]: Connection closed by 87.236.176.170 port 38161 [preauth]
Jan 26 09:44:11 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 26 09:44:11 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 86 pg[9.a( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=9 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=86 pruub=11.490267754s) [1] r=-1 lpr=86 pi=[63,86)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 240.465744019s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:11 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 86 pg[9.a( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=9 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=86 pruub=11.490221024s) [1] r=-1 lpr=86 pi=[63,86)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 240.465744019s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:11 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 86 pg[9.1a( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=86 pruub=11.489218712s) [1] r=-1 lpr=86 pi=[63,86)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 240.467102051s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:11 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 86 pg[9.1a( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=86 pruub=11.489179611s) [1] r=-1 lpr=86 pi=[63,86)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 240.467102051s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:11 compute-0 ceph-mon[74456]: 8.1 scrub starts
Jan 26 09:44:11 compute-0 ceph-mon[74456]: 8.1 scrub ok
Jan 26 09:44:11 compute-0 ceph-mon[74456]: 12.1d scrub starts
Jan 26 09:44:11 compute-0 ceph-mon[74456]: 12.1d scrub ok
Jan 26 09:44:11 compute-0 ceph-mon[74456]: 10.6 scrub starts
Jan 26 09:44:11 compute-0 ceph-mon[74456]: 10.6 scrub ok
Jan 26 09:44:11 compute-0 ceph-mon[74456]: osdmap e85: 3 total, 3 up, 3 in
Jan 26 09:44:11 compute-0 ceph-mon[74456]: 11.0 scrub starts
Jan 26 09:44:11 compute-0 ceph-mon[74456]: 11.0 scrub ok
Jan 26 09:44:11 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:11 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:11 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:11 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Jan 26 09:44:11 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 26 09:44:11 compute-0 ceph-mon[74456]: 12.1f deep-scrub starts
Jan 26 09:44:11 compute-0 ceph-mon[74456]: 12.1f deep-scrub ok
Jan 26 09:44:11 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 86 pg[9.9( v 60'1159 (0'0,60'1159] local-lis/les=85/86 n=6 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[63,85)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:11 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 86 pg[9.19( v 60'1159 (0'0,60'1159] local-lis/les=85/86 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[63,85)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:11 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 26 09:44:11 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 26 09:44:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Jan 26 09:44:12 compute-0 ceph-mgr[74755]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 26 09:44:12 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.zllcia(active, since 95s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:44:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:12 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:12 compute-0 sshd-session[93068]: Connection closed by 192.168.122.100 port 54002
Jan 26 09:44:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:12 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003f50 fd 15 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:12 compute-0 sshd-session[93038]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 26 09:44:12 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 26 09:44:12 compute-0 systemd[1]: session-35.scope: Consumed 48.564s CPU time.
Jan 26 09:44:12 compute-0 systemd-logind[787]: Session 35 logged out. Waiting for processes to exit.
Jan 26 09:44:12 compute-0 systemd-logind[787]: Removed session 35.
Jan 26 09:44:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ignoring --setuser ceph since I am not root
Jan 26 09:44:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ignoring --setgroup ceph since I am not root
Jan 26 09:44:12 compute-0 ceph-mgr[74755]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 26 09:44:12 compute-0 ceph-mgr[74755]: pidfile_write: ignore empty --pid-file
Jan 26 09:44:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:12.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:12 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'alerts'
Jan 26 09:44:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:12 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:12.243+0000 7ff159550140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 09:44:12 compute-0 ceph-mgr[74755]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 09:44:12 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'balancer'
Jan 26 09:44:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 26 09:44:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:12.325+0000 7ff159550140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 09:44:12 compute-0 ceph-mgr[74755]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 09:44:12 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'cephadm'
Jan 26 09:44:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:12.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 26 09:44:12 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 26 09:44:12 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 87 pg[9.a( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=9 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=87) [1]/[0] r=0 lpr=87 pi=[63,87)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:12 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 87 pg[9.a( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=9 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=87) [1]/[0] r=0 lpr=87 pi=[63,87)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:44:12 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 87 pg[9.9( v 60'1159 (0'0,60'1159] local-lis/les=85/86 n=6 ec=63/48 lis/c=85/63 les/c/f=86/65/0 sis=87 pruub=14.860494614s) [2] async=[2] r=-1 lpr=87 pi=[63,87)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 244.987625122s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:12 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 87 pg[9.9( v 60'1159 (0'0,60'1159] local-lis/les=85/86 n=6 ec=63/48 lis/c=85/63 les/c/f=86/65/0 sis=87 pruub=14.860408783s) [2] r=-1 lpr=87 pi=[63,87)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 244.987625122s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:12 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 87 pg[9.19( v 60'1159 (0'0,60'1159] local-lis/les=85/86 n=5 ec=63/48 lis/c=85/63 les/c/f=86/65/0 sis=87 pruub=14.859416962s) [2] async=[2] r=-1 lpr=87 pi=[63,87)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 244.987655640s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:12 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 87 pg[9.19( v 60'1159 (0'0,60'1159] local-lis/les=85/86 n=5 ec=63/48 lis/c=85/63 les/c/f=86/65/0 sis=87 pruub=14.859327316s) [2] r=-1 lpr=87 pi=[63,87)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 244.987655640s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:12 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 87 pg[9.1a( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=87) [1]/[0] r=0 lpr=87 pi=[63,87)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:12 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 87 pg[9.1a( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=87) [1]/[0] r=0 lpr=87 pi=[63,87)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:44:12 compute-0 ceph-mon[74456]: pgmap v79: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:12 compute-0 ceph-mon[74456]: 11.8 scrub starts
Jan 26 09:44:12 compute-0 ceph-mon[74456]: 11.8 scrub ok
Jan 26 09:44:12 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 26 09:44:12 compute-0 ceph-mon[74456]: osdmap e86: 3 total, 3 up, 3 in
Jan 26 09:44:12 compute-0 ceph-mon[74456]: 11.1f scrub starts
Jan 26 09:44:12 compute-0 ceph-mon[74456]: 11.1f scrub ok
Jan 26 09:44:12 compute-0 ceph-mon[74456]: 11.17 scrub starts
Jan 26 09:44:12 compute-0 ceph-mon[74456]: 11.17 scrub ok
Jan 26 09:44:12 compute-0 ceph-mon[74456]: 10.1c scrub starts
Jan 26 09:44:12 compute-0 ceph-mon[74456]: 10.1c scrub ok
Jan 26 09:44:12 compute-0 ceph-mon[74456]: from='mgr.14454 192.168.122.100:0/1534630975' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Jan 26 09:44:12 compute-0 ceph-mon[74456]: mgrmap e27: compute-0.zllcia(active, since 95s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:44:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:44:12 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 26 09:44:12 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 26 09:44:12 compute-0 sshd-session[101334]: Invalid user admin from 157.245.76.178 port 50970
Jan 26 09:44:12 compute-0 sshd-session[101334]: Connection closed by invalid user admin 157.245.76.178 port 50970 [preauth]
Jan 26 09:44:13 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'crash'
Jan 26 09:44:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:13.189+0000 7ff159550140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 09:44:13 compute-0 ceph-mgr[74755]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 09:44:13 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'dashboard'
Jan 26 09:44:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 26 09:44:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 26 09:44:13 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 26 09:44:13 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 88 pg[9.1a( v 60'1159 (0'0,60'1159] local-lis/les=87/88 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=87) [1]/[0] async=[1] r=0 lpr=87 pi=[63,87)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:13 compute-0 ceph-mon[74456]: osdmap e87: 3 total, 3 up, 3 in
Jan 26 09:44:13 compute-0 ceph-mon[74456]: 8.1d scrub starts
Jan 26 09:44:13 compute-0 ceph-mon[74456]: 8.1d scrub ok
Jan 26 09:44:13 compute-0 ceph-mon[74456]: 8.5 scrub starts
Jan 26 09:44:13 compute-0 ceph-mon[74456]: 8.5 scrub ok
Jan 26 09:44:13 compute-0 ceph-mon[74456]: 10.1d scrub starts
Jan 26 09:44:13 compute-0 ceph-mon[74456]: 10.1d scrub ok
Jan 26 09:44:13 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 88 pg[9.a( v 60'1159 (0'0,60'1159] local-lis/les=87/88 n=9 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=87) [1]/[0] async=[1] r=0 lpr=87 pi=[63,87)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:13 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'devicehealth'
Jan 26 09:44:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:13.840+0000 7ff159550140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 09:44:13 compute-0 ceph-mgr[74755]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 09:44:13 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'diskprediction_local'
Jan 26 09:44:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 26 09:44:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 26 09:44:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]:   from numpy import show_config as show_numpy_config
Jan 26 09:44:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:13.999+0000 7ff159550140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 09:44:14 compute-0 ceph-mgr[74755]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 09:44:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'influx'
Jan 26 09:44:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:14 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:14.075+0000 7ff159550140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 09:44:14 compute-0 ceph-mgr[74755]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 09:44:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'insights'
Jan 26 09:44:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:14 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:14.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'iostat'
Jan 26 09:44:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:14 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4003f50 fd 15 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:14.220+0000 7ff159550140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 09:44:14 compute-0 ceph-mgr[74755]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 09:44:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'k8sevents'
Jan 26 09:44:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:14.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 26 09:44:14 compute-0 ceph-mon[74456]: osdmap e88: 3 total, 3 up, 3 in
Jan 26 09:44:14 compute-0 ceph-mon[74456]: 12.1e scrub starts
Jan 26 09:44:14 compute-0 ceph-mon[74456]: 12.1e scrub ok
Jan 26 09:44:14 compute-0 ceph-mon[74456]: 12.1b scrub starts
Jan 26 09:44:14 compute-0 ceph-mon[74456]: 12.1b scrub ok
Jan 26 09:44:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 26 09:44:14 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 26 09:44:14 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 89 pg[9.a( v 60'1159 (0'0,60'1159] local-lis/les=87/88 n=9 ec=63/48 lis/c=87/63 les/c/f=88/65/0 sis=89 pruub=14.957056046s) [1] async=[1] r=-1 lpr=89 pi=[63,89)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 247.190963745s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:14 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 89 pg[9.a( v 60'1159 (0'0,60'1159] local-lis/les=87/88 n=9 ec=63/48 lis/c=87/63 les/c/f=88/65/0 sis=89 pruub=14.956928253s) [1] r=-1 lpr=89 pi=[63,89)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 247.190963745s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:14 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 89 pg[9.1a( v 60'1159 (0'0,60'1159] local-lis/les=87/88 n=5 ec=63/48 lis/c=87/63 les/c/f=88/65/0 sis=89 pruub=14.952177048s) [1] async=[1] r=-1 lpr=89 pi=[63,89)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 247.187301636s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:14 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 89 pg[9.1a( v 60'1159 (0'0,60'1159] local-lis/les=87/88 n=5 ec=63/48 lis/c=87/63 les/c/f=88/65/0 sis=89 pruub=14.952108383s) [1] r=-1 lpr=89 pi=[63,89)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 247.187301636s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'localpool'
Jan 26 09:44:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'mds_autoscaler'
Jan 26 09:44:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'mirroring'
Jan 26 09:44:14 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'nfs'
Jan 26 09:44:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:15.224+0000 7ff159550140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 09:44:15 compute-0 ceph-mgr[74755]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 09:44:15 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'orchestrator'
Jan 26 09:44:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:15.434+0000 7ff159550140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 09:44:15 compute-0 ceph-mgr[74755]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 09:44:15 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'osd_perf_query'
Jan 26 09:44:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:15.509+0000 7ff159550140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 09:44:15 compute-0 ceph-mgr[74755]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 09:44:15 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'osd_support'
Jan 26 09:44:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:15.576+0000 7ff159550140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 09:44:15 compute-0 ceph-mgr[74755]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 09:44:15 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'pg_autoscaler'
Jan 26 09:44:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 26 09:44:15 compute-0 ceph-mon[74456]: osdmap e89: 3 total, 3 up, 3 in
Jan 26 09:44:15 compute-0 ceph-mon[74456]: 12.2 scrub starts
Jan 26 09:44:15 compute-0 ceph-mon[74456]: 12.2 scrub ok
Jan 26 09:44:15 compute-0 ceph-mon[74456]: 12.16 scrub starts
Jan 26 09:44:15 compute-0 ceph-mon[74456]: 12.16 scrub ok
Jan 26 09:44:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 26 09:44:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:15.657+0000 7ff159550140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 09:44:15 compute-0 ceph-mgr[74755]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 09:44:15 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'progress'
Jan 26 09:44:15 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 26 09:44:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:15.729+0000 7ff159550140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 09:44:15 compute-0 ceph-mgr[74755]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 09:44:15 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'prometheus'
Jan 26 09:44:15 compute-0 sshd-session[101349]: Accepted publickey for zuul from 192.168.122.30 port 58784 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:44:15 compute-0 systemd-logind[787]: New session 37 of user zuul.
Jan 26 09:44:15 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 26 09:44:15 compute-0 sshd-session[101349]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:44:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:16 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:16 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:16.109+0000 7ff159550140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 09:44:16 compute-0 ceph-mgr[74755]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 09:44:16 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rbd_support'
Jan 26 09:44:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:16.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:16 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4000b60 fd 15 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:16.218+0000 7ff159550140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 09:44:16 compute-0 ceph-mgr[74755]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 09:44:16 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'restful'
Jan 26 09:44:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:16.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:16 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rgw'
Jan 26 09:44:16 compute-0 ceph-mon[74456]: osdmap e90: 3 total, 3 up, 3 in
Jan 26 09:44:16 compute-0 ceph-mon[74456]: 11.19 scrub starts
Jan 26 09:44:16 compute-0 ceph-mon[74456]: 11.19 scrub ok
Jan 26 09:44:16 compute-0 ceph-mon[74456]: 12.14 scrub starts
Jan 26 09:44:16 compute-0 ceph-mon[74456]: 12.14 scrub ok
Jan 26 09:44:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:16.709+0000 7ff159550140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 09:44:16 compute-0 ceph-mgr[74755]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 09:44:16 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'rook'
Jan 26 09:44:16 compute-0 python3.9[101505]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:44:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:17.318+0000 7ff159550140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 09:44:17 compute-0 ceph-mgr[74755]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 09:44:17 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'selftest'
Jan 26 09:44:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:17.409+0000 7ff159550140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 09:44:17 compute-0 ceph-mgr[74755]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 09:44:17 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'snap_schedule'
Jan 26 09:44:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:17.490+0000 7ff159550140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 09:44:17 compute-0 ceph-mgr[74755]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 09:44:17 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'stats'
Jan 26 09:44:17 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'status'
Jan 26 09:44:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:44:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:17.645+0000 7ff159550140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 09:44:17 compute-0 ceph-mgr[74755]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 09:44:17 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'telegraf'
Jan 26 09:44:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:17.719+0000 7ff159550140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 09:44:17 compute-0 ceph-mgr[74755]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 09:44:17 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'telemetry'
Jan 26 09:44:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:17.879+0000 7ff159550140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 09:44:17 compute-0 ceph-mgr[74755]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 09:44:17 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'test_orchestrator'
Jan 26 09:44:18 compute-0 ceph-mon[74456]: 10.f scrub starts
Jan 26 09:44:18 compute-0 ceph-mon[74456]: 10.f scrub ok
Jan 26 09:44:18 compute-0 ceph-mon[74456]: 12.1 scrub starts
Jan 26 09:44:18 compute-0 ceph-mon[74456]: 12.1 scrub ok
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:18 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:18 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:18.119+0000 7ff159550140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'volumes'
Jan 26 09:44:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:18.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:18 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 09:44:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:18.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:18.400+0000 7ff159550140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr[py] Loading python module 'zabbix'
Jan 26 09:44:18 compute-0 sudo[101719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lghpvwgoabqbquewzgaomdymknpdklrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420658.0084815-51-177447738257263/AnsiballZ_command.py'
Jan 26 09:44:18 compute-0 sudo[101719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:18.472+0000 7ff159550140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Active manager daemon compute-0.zllcia restarted
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: ms_deliver_dispatch: unhandled message 0x55f9cc945860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zllcia
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr handle_mgr_map Activating!
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr handle_mgr_map I am now activating
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.zllcia(active, starting, since 0.0459661s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.zhqpiu"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.zhqpiu"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e10 all = 0
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.rbkelk"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.rbkelk"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e10 all = 0
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.zprrum"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zprrum"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e10 all = 0
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.xammti", "id": "compute-1.xammti"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-1.xammti", "id": "compute-1.xammti"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.oynaeu", "id": "compute-2.oynaeu"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oynaeu", "id": "compute-2.oynaeu"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).mds e10 all = 1
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: balancer
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : Manager daemon compute-0.zllcia is now available
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Starting
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:44:18
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: cephadm
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: crash
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: dashboard
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: devicehealth
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: iostat
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [devicehealth INFO root] Starting
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: nfs
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO sso] Loading SSO DB version=1
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: orchestrator
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.xammti restarted
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.xammti started
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: pg_autoscaler
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: progress
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [progress INFO root] Loading...
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7ff0d88812b0>, <progress.module.GhostEvent object at 0x7ff0d88812e0>, <progress.module.GhostEvent object at 0x7ff0d8881310>, <progress.module.GhostEvent object at 0x7ff0d8881340>, <progress.module.GhostEvent object at 0x7ff0d8881370>, <progress.module.GhostEvent object at 0x7ff0d88813a0>, <progress.module.GhostEvent object at 0x7ff0d88813d0>, <progress.module.GhostEvent object at 0x7ff0d8881400>, <progress.module.GhostEvent object at 0x7ff0d8881430>, <progress.module.GhostEvent object at 0x7ff0d8881460>, <progress.module.GhostEvent object at 0x7ff0d8881490>, <progress.module.GhostEvent object at 0x7ff0d88814c0>, <progress.module.GhostEvent object at 0x7ff0d88814f0>, <progress.module.GhostEvent object at 0x7ff0d8881520>, <progress.module.GhostEvent object at 0x7ff0d8881550>, <progress.module.GhostEvent object at 0x7ff0d8881580>, <progress.module.GhostEvent object at 0x7ff0d88815b0>, <progress.module.GhostEvent object at 0x7ff0d88815e0>, <progress.module.GhostEvent object at 0x7ff0d8881610>, <progress.module.GhostEvent object at 0x7ff0d8881640>, <progress.module.GhostEvent object at 0x7ff0d8881670>, <progress.module.GhostEvent object at 0x7ff0d88816a0>, <progress.module.GhostEvent object at 0x7ff0d88816d0>, <progress.module.GhostEvent object at 0x7ff0d8881700>, <progress.module.GhostEvent object at 0x7ff0d8881730>, <progress.module.GhostEvent object at 0x7ff0d8881760>] historic events
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [progress INFO root] Loaded OSDMap, ready.
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: prometheus
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [prometheus INFO root] server_addr: :: server_port: 9283
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [prometheus INFO root] Cache enabled
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [prometheus INFO root] starting metric collection thread
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [prometheus INFO root] Starting engine...
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: [26/Jan/2026:09:44:18] ENGINE Bus STARTING
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.error] [26/Jan/2026:09:44:18] ENGINE Bus STARTING
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: CherryPy Checker:
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: The Application mounted at '' has an empty config.
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 
Jan 26 09:44:18 compute-0 python3.9[101721]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] recovery thread starting
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] starting setup
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: rbd_support
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: restful
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: status
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: telemetry
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [restful INFO root] server_addr: :: server_port: 8003
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [restful WARNING root] server not running: no certificate configured
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] PerfHandler: starting
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: mgr load Constructed class from module: volumes
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oynaeu restarted
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.oynaeu started
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TaskHandler: starting
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: client.0 error registering admin socket command: (17) File exists
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:18.781+0000 7ff0c145e640 -1 client.0 error registering admin socket command: (17) File exists
Jan 26 09:44:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"} v 0)
Jan 26 09:44:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"}]: dispatch
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: client.0 error registering admin socket command: (17) File exists
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: client.0 error registering admin socket command: (17) File exists
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:18.783+0000 7ff0c6468640 -1 client.0 error registering admin socket command: (17) File exists
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: client.0 error registering admin socket command: (17) File exists
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:18.783+0000 7ff0c6468640 -1 client.0 error registering admin socket command: (17) File exists
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:18.783+0000 7ff0c6468640 -1 client.0 error registering admin socket command: (17) File exists
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: client.0 error registering admin socket command: (17) File exists
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:18.783+0000 7ff0c6468640 -1 client.0 error registering admin socket command: (17) File exists
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: client.0 error registering admin socket command: (17) File exists
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T09:44:18.783+0000 7ff0c6468640 -1 client.0 error registering admin socket command: (17) File exists
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] setup complete
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: [26/Jan/2026:09:44:18] ENGINE Serving on http://:::9283
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.error] [26/Jan/2026:09:44:18] ENGINE Serving on http://:::9283
Jan 26 09:44:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: [26/Jan/2026:09:44:18] ENGINE Bus STARTED
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.error] [26/Jan/2026:09:44:18] ENGINE Bus STARTED
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [prometheus INFO root] Engine started.
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 26 09:44:18 compute-0 sshd-session[101884]: Accepted publickey for ceph-admin from 192.168.122.100 port 55998 ssh2: RSA SHA256:cGz1g5qmzBfeiAiDRElnaAonZh1cdMIZMAXyGkEzbws
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 26 09:44:18 compute-0 systemd-logind[787]: New session 38 of user ceph-admin.
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 26 09:44:18 compute-0 systemd[1]: Started Session 38 of User ceph-admin.
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 26 09:44:18 compute-0 sshd-session[101884]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 26 09:44:18 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 26 09:44:19 compute-0 ceph-mon[74456]: 12.3 scrub starts
Jan 26 09:44:19 compute-0 ceph-mon[74456]: 12.3 scrub ok
Jan 26 09:44:19 compute-0 ceph-mon[74456]: 11.1a scrub starts
Jan 26 09:44:19 compute-0 ceph-mon[74456]: 11.1a scrub ok
Jan 26 09:44:19 compute-0 ceph-mon[74456]: Active manager daemon compute-0.zllcia restarted
Jan 26 09:44:19 compute-0 ceph-mon[74456]: Activating manager daemon compute-0.zllcia
Jan 26 09:44:19 compute-0 ceph-mon[74456]: osdmap e91: 3 total, 3 up, 3 in
Jan 26 09:44:19 compute-0 ceph-mon[74456]: mgrmap e28: compute-0.zllcia(active, starting, since 0.0459661s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.zhqpiu"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.rbkelk"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zprrum"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-0.zllcia", "id": "compute-0.zllcia"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-1.xammti", "id": "compute-1.xammti"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mgr metadata", "who": "compute-2.oynaeu", "id": "compute-2.oynaeu"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: Manager daemon compute-0.zllcia is now available
Jan 26 09:44:19 compute-0 ceph-mon[74456]: Standby manager daemon compute-1.xammti restarted
Jan 26 09:44:19 compute-0 ceph-mon[74456]: Standby manager daemon compute-1.xammti started
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/mirror_snapshot_schedule"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: Standby manager daemon compute-2.oynaeu restarted
Jan 26 09:44:19 compute-0 ceph-mon[74456]: Standby manager daemon compute-2.oynaeu started
Jan 26 09:44:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zllcia/trash_purge_schedule"}]: dispatch
Jan 26 09:44:19 compute-0 ceph-mon[74456]: 11.1e deep-scrub starts
Jan 26 09:44:19 compute-0 sudo[101899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:19 compute-0 sudo[101899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:19 compute-0 sudo[101899]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [dashboard INFO dashboard.module] Engine started.
Jan 26 09:44:19 compute-0 sudo[101928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 26 09:44:19 compute-0 sudo[101928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:19 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 26 09:44:19 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 26 09:44:19 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.zllcia(active, since 1.07102s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:19 compute-0 podman[102024]: 2026-01-26 09:44:19.84549733 +0000 UTC m=+0.055078824 container exec 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:44:19] ENGINE Bus STARTING
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:44:19] ENGINE Bus STARTING
Jan 26 09:44:19 compute-0 podman[102024]: 2026-01-26 09:44:19.932987243 +0000 UTC m=+0.142568757 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:44:19] ENGINE Serving on http://192.168.122.100:8765
Jan 26 09:44:19 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:44:19] ENGINE Serving on http://192.168.122.100:8765
Jan 26 09:44:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:20 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e40016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:20 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:20 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:44:20] ENGINE Serving on https://192.168.122.100:7150
Jan 26 09:44:20 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:44:20] ENGINE Serving on https://192.168.122.100:7150
Jan 26 09:44:20 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:44:20] ENGINE Bus STARTED
Jan 26 09:44:20 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:44:20] ENGINE Bus STARTED
Jan 26 09:44:20 compute-0 ceph-mgr[74755]: [cephadm INFO cherrypy.error] [26/Jan/2026:09:44:20] ENGINE Client ('192.168.122.100', 36132) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 09:44:20 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : [26/Jan/2026:09:44:20] ENGINE Client ('192.168.122.100', 36132) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 09:44:20 compute-0 ceph-mon[74456]: 12.9 scrub starts
Jan 26 09:44:20 compute-0 ceph-mon[74456]: 12.9 scrub ok
Jan 26 09:44:20 compute-0 ceph-mon[74456]: 11.1e deep-scrub ok
Jan 26 09:44:20 compute-0 ceph-mon[74456]: 8.1e scrub starts
Jan 26 09:44:20 compute-0 ceph-mon[74456]: 8.1e scrub ok
Jan 26 09:44:20 compute-0 ceph-mon[74456]: mgrmap e29: compute-0.zllcia(active, since 1.07102s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:44:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:20.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:20 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:20 compute-0 podman[102164]: 2026-01-26 09:44:20.365119028 +0000 UTC m=+0.058891553 container exec 57a35f5609c036543a7218c3413c7cd92ec725c73b5cc2d0a3c41170bf8442ad (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:20.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:20 compute-0 podman[102164]: 2026-01-26 09:44:20.403663809 +0000 UTC m=+0.097436314 container exec_died 57a35f5609c036543a7218c3413c7cd92ec725c73b5cc2d0a3c41170bf8442ad (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:20 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 26 09:44:20 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 26 09:44:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 26 09:44:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 26 09:44:20 compute-0 ceph-mgr[74755]: [devicehealth INFO root] Check health
Jan 26 09:44:20 compute-0 podman[102265]: 2026-01-26 09:44:20.789361274 +0000 UTC m=+0.058569084 container exec d3395b53724857015134a8bdb584007eb1b94a5b002c559505dba80a9d92ea83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 09:44:20 compute-0 podman[102265]: 2026-01-26 09:44:20.802360772 +0000 UTC m=+0.071568632 container exec_died d3395b53724857015134a8bdb584007eb1b94a5b002c559505dba80a9d92ea83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:44:21 compute-0 podman[102326]: 2026-01-26 09:44:21.042425492 +0000 UTC m=+0.070938314 container exec 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 09:44:21 compute-0 podman[102326]: 2026-01-26 09:44:21.050800906 +0000 UTC m=+0.079313738 container exec_died 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 09:44:21 compute-0 ceph-mon[74456]: 8.d scrub starts
Jan 26 09:44:21 compute-0 ceph-mon[74456]: 8.d scrub ok
Jan 26 09:44:21 compute-0 ceph-mon[74456]: [26/Jan/2026:09:44:19] ENGINE Bus STARTING
Jan 26 09:44:21 compute-0 ceph-mon[74456]: [26/Jan/2026:09:44:19] ENGINE Serving on http://192.168.122.100:8765
Jan 26 09:44:21 compute-0 ceph-mon[74456]: 11.1c scrub starts
Jan 26 09:44:21 compute-0 ceph-mon[74456]: 11.1c scrub ok
Jan 26 09:44:21 compute-0 ceph-mon[74456]: [26/Jan/2026:09:44:20] ENGINE Serving on https://192.168.122.100:7150
Jan 26 09:44:21 compute-0 ceph-mon[74456]: [26/Jan/2026:09:44:20] ENGINE Bus STARTED
Jan 26 09:44:21 compute-0 ceph-mon[74456]: [26/Jan/2026:09:44:20] ENGINE Client ('192.168.122.100', 36132) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 09:44:21 compute-0 ceph-mon[74456]: 8.1a scrub starts
Jan 26 09:44:21 compute-0 ceph-mon[74456]: 8.1a scrub ok
Jan 26 09:44:21 compute-0 ceph-mon[74456]: pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:21 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 26 09:44:21 compute-0 ceph-mon[74456]: 8.1b scrub starts
Jan 26 09:44:21 compute-0 ceph-mon[74456]: 8.1b scrub ok
Jan 26 09:44:21 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.zllcia(active, since 2s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:44:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:44:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:44:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:21 compute-0 podman[102392]: 2026-01-26 09:44:21.278420684 +0000 UTC m=+0.050893431 container exec 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph)
Jan 26 09:44:21 compute-0 podman[102392]: 2026-01-26 09:44:21.289582038 +0000 UTC m=+0.062054685 container exec_died 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, io.openshift.expose-services=, name=keepalived, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 26 09:44:21 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 26 09:44:21 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 26 09:44:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 26 09:44:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:44:21 compute-0 podman[102454]: 2026-01-26 09:44:21.83614591 +0000 UTC m=+0.412120634 container exec c4359c311b7c569be419514f7aac4166a74171aef95e4c4175d3ad1795dea38a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:21 compute-0 podman[102454]: 2026-01-26 09:44:21.874703352 +0000 UTC m=+0.450677946 container exec_died c4359c311b7c569be419514f7aac4166a74171aef95e4c4175d3ad1795dea38a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:22 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:22 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e40016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:22.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:22 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:22 compute-0 podman[102528]: 2026-01-26 09:44:22.186266222 +0000 UTC m=+0.058057870 container exec 19752b52da5205ecf87a29f7ba2f0a5446dcbf057bedea6661df25a0a9f3af6a (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:44:22 compute-0 podman[102528]: 2026-01-26 09:44:22.363300789 +0000 UTC m=+0.235092427 container exec_died 19752b52da5205ecf87a29f7ba2f0a5446dcbf057bedea6661df25a0a9f3af6a (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:22.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v5: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 26 09:44:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 26 09:44:22 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 26 09:44:22 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 26 09:44:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 26 09:44:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 26 09:44:23 compute-0 ceph-mon[74456]: 8.3 deep-scrub starts
Jan 26 09:44:23 compute-0 ceph-mon[74456]: 8.3 deep-scrub ok
Jan 26 09:44:23 compute-0 ceph-mon[74456]: mgrmap e30: compute-0.zllcia(active, since 2s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:44:23 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:23 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:23 compute-0 ceph-mon[74456]: 11.18 scrub starts
Jan 26 09:44:23 compute-0 ceph-mon[74456]: 11.18 scrub ok
Jan 26 09:44:23 compute-0 ceph-mon[74456]: 11.7 scrub starts
Jan 26 09:44:23 compute-0 ceph-mon[74456]: 11.7 scrub ok
Jan 26 09:44:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:23 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 26 09:44:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:44:23 compute-0 podman[102648]: 2026-01-26 09:44:23.327638049 +0000 UTC m=+0.634191822 container exec 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:44:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:23 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.zllcia(active, since 5s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:44:23 compute-0 podman[102680]: 2026-01-26 09:44:23.537443609 +0000 UTC m=+0.144060810 container exec_died 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:23 compute-0 podman[102648]: 2026-01-26 09:44:23.544950027 +0000 UTC m=+0.851503750 container exec_died 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 26 09:44:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 09:44:23 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 26 09:44:23 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 26 09:44:23 compute-0 sudo[101928]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:44:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:44:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:23 compute-0 sudo[102693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:23 compute-0 sudo[102693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:23 compute-0 sudo[102693]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:23 compute-0 sudo[102718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:44:23 compute-0 sudo[102718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:24 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:24 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb9140034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:24.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:24 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e40016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:24 compute-0 sudo[102718]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:24 compute-0 sudo[102778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:24 compute-0 sudo[102778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:24 compute-0 sudo[102778]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:24.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:24 compute-0 sudo[102803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 26 09:44:24 compute-0 sudo[102803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v7: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:24 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Jan 26 09:44:24 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Jan 26 09:44:24 compute-0 sudo[102803]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 26 09:44:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 26 09:44:25 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 26 09:44:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:44:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:44:25 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.e scrub starts
Jan 26 09:44:25 compute-0 ceph-mon[74456]: 10.1 scrub starts
Jan 26 09:44:25 compute-0 ceph-mon[74456]: 10.1 scrub ok
Jan 26 09:44:25 compute-0 ceph-mon[74456]: pgmap v5: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:25 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 26 09:44:25 compute-0 ceph-mon[74456]: 10.5 scrub starts
Jan 26 09:44:25 compute-0 ceph-mon[74456]: 10.5 scrub ok
Jan 26 09:44:25 compute-0 ceph-mon[74456]: 8.2 scrub starts
Jan 26 09:44:25 compute-0 ceph-mon[74456]: 8.2 scrub ok
Jan 26 09:44:25 compute-0 ceph-mon[74456]: 11.4 scrub starts
Jan 26 09:44:25 compute-0 ceph-mon[74456]: 11.4 scrub ok
Jan 26 09:44:25 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 26 09:44:25 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:25 compute-0 ceph-mon[74456]: osdmap e92: 3 total, 3 up, 3 in
Jan 26 09:44:25 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:25 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:25 compute-0 ceph-mon[74456]: mgrmap e31: compute-0.zllcia(active, since 5s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:44:25 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:25 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 09:44:25 compute-0 ceph-mon[74456]: 10.13 scrub starts
Jan 26 09:44:25 compute-0 ceph-mon[74456]: 10.13 scrub ok
Jan 26 09:44:25 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:25 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:25 compute-0 ceph-mon[74456]: 11.1b scrub starts
Jan 26 09:44:25 compute-0 ceph-mon[74456]: 11.1b scrub ok
Jan 26 09:44:25 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.e scrub ok
Jan 26 09:44:25 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 26 09:44:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 26 09:44:25 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:25 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 26 09:44:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:44:25 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:44:25 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 09:44:25 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 09:44:25 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 26 09:44:25 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 09:44:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:44:25 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:44:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:44:25 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:44:25 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:44:25 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:44:25 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:44:25 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:44:25 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:44:25 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:44:25 compute-0 sudo[102847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 26 09:44:25 compute-0 sudo[102847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:25 compute-0 sudo[102847]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:25 compute-0 sudo[102872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph
Jan 26 09:44:25 compute-0 sudo[102872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:25 compute-0 sudo[102872]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:25 compute-0 sudo[102897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:44:25 compute-0 sudo[102897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:25 compute-0 sudo[102897]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:25 compute-0 sudo[102922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:44:25 compute-0 sudo[102922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:25 compute-0 sudo[102922]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:26 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:26 compute-0 sudo[102947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:44:26 compute-0 sudo[102947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:26 compute-0 sudo[102947]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:26 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:26.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:26 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e40016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:26 compute-0 sudo[102995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:44:26 compute-0 sudo[102995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:26 compute-0 sudo[102995]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:26 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:44:26 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:44:26 compute-0 sudo[103020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new
Jan 26 09:44:26 compute-0 sudo[103020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:26 compute-0 sudo[103020]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:26 compute-0 sudo[103047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 26 09:44:26 compute-0 sudo[103047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:26 compute-0 sudo[103047]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:26 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:44:26 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:44:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 09:44:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:26.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 09:44:26 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:44:26 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:44:26 compute-0 sudo[103072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:44:26 compute-0 sudo[103072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:26 compute-0 sudo[103072]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:26 compute-0 sudo[103097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:44:26 compute-0 sudo[103097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:26 compute-0 sudo[103097]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v9: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 8 op/s
Jan 26 09:44:26 compute-0 sudo[103122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:44:26 compute-0 sudo[103122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:26 compute-0 sudo[103122]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:26 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.b scrub starts
Jan 26 09:44:26 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.b scrub ok
Jan 26 09:44:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 26 09:44:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 26 09:44:26 compute-0 sudo[103147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:44:26 compute-0 sudo[103147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:26 compute-0 sudo[103147]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:44:26] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Jan 26 09:44:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:44:26] "GET /metrics HTTP/1.1" 200 46583 "" "Prometheus/2.51.0"
Jan 26 09:44:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 26 09:44:26 compute-0 sudo[103172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:44:26 compute-0 sudo[103172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:26 compute-0 sudo[103172]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:26 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:44:26 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:44:26 compute-0 sudo[103221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:44:26 compute-0 sudo[103221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:26 compute-0 sudo[103221]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:26 compute-0 sudo[103246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new
Jan 26 09:44:26 compute-0 sudo[103246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:26 compute-0 sudo[103246]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:26 compute-0 sudo[103271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf.new /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:44:26 compute-0 sudo[103271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:26 compute-0 sudo[103271]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:26 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:44:26 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 11.3 scrub starts
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 11.3 scrub ok
Jan 26 09:44:26 compute-0 ceph-mon[74456]: pgmap v7: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 12.12 scrub starts
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 12.4 scrub starts
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 12.12 scrub ok
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 12.4 scrub ok
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 8.4 scrub starts
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 8.4 scrub ok
Jan 26 09:44:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 12.e scrub starts
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 8.9 scrub starts
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 8.9 scrub ok
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 12.e scrub ok
Jan 26 09:44:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 26 09:44:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:26 compute-0 ceph-mon[74456]: osdmap e93: 3 total, 3 up, 3 in
Jan 26 09:44:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 09:44:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 09:44:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:44:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:44:26 compute-0 ceph-mon[74456]: Updating compute-0:/etc/ceph/ceph.conf
Jan 26 09:44:26 compute-0 ceph-mon[74456]: Updating compute-1:/etc/ceph/ceph.conf
Jan 26 09:44:26 compute-0 ceph-mon[74456]: Updating compute-2:/etc/ceph/ceph.conf
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 11.5 scrub starts
Jan 26 09:44:26 compute-0 ceph-mon[74456]: 11.5 scrub ok
Jan 26 09:44:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 26 09:44:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 26 09:44:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 26 09:44:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 26 09:44:26 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 26 09:44:27 compute-0 sudo[103301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 26 09:44:27 compute-0 sudo[103301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103301]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 sudo[103326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph
Jan 26 09:44:27 compute-0 sudo[103326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103326]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:44:27 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:44:27 compute-0 sudo[103351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:44:27 compute-0 sudo[103351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103351]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 sudo[103376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:44:27 compute-0 sudo[103376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103376]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:44:27 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:44:27 compute-0 sudo[103401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:44:27 compute-0 sudo[103401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103401]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 sudo[103451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:44:27 compute-0 sudo[103451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103451]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 sudo[103476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new
Jan 26 09:44:27 compute-0 sudo[103476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103476]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 sudo[103502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 26 09:44:27 compute-0 sudo[103502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103502]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:44:27 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:44:27 compute-0 sudo[103527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:44:27 compute-0 sudo[103527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103527]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.c deep-scrub starts
Jan 26 09:44:27 compute-0 sudo[103552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config
Jan 26 09:44:27 compute-0 sudo[103552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:44:27 compute-0 sudo[103552]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 sudo[103577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:44:27 compute-0 sudo[103577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103577]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:44:27 compute-0 sudo[103602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:44:27 compute-0 sudo[103602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103602]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 sudo[103627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:44:27 compute-0 sudo[103627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103627]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:44:27 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:44:27 compute-0 sudo[101719]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 sudo[103675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:44:27 compute-0 sudo[103675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103675]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:27 compute-0 sudo[103724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new
Jan 26 09:44:27 compute-0 sudo[103724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:27 compute-0 sudo[103724]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:28 compute-0 sudo[103749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1a70b85d-e3fd-5814-8a6a-37ea00fcae30/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring.new /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:44:28 compute-0 sudo[103749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:28 compute-0 sudo[103749]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:28 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:28 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.c deep-scrub ok
Jan 26 09:44:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:44:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:44:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:28 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:28 compute-0 ceph-mon[74456]: Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:44:28 compute-0 ceph-mon[74456]: Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:44:28 compute-0 ceph-mon[74456]: Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.conf
Jan 26 09:44:28 compute-0 ceph-mon[74456]: pgmap v9: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 8 op/s
Jan 26 09:44:28 compute-0 ceph-mon[74456]: 12.b scrub starts
Jan 26 09:44:28 compute-0 ceph-mon[74456]: 12.b scrub ok
Jan 26 09:44:28 compute-0 ceph-mon[74456]: 8.1c scrub starts
Jan 26 09:44:28 compute-0 ceph-mon[74456]: 8.1c scrub ok
Jan 26 09:44:28 compute-0 ceph-mon[74456]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:44:28 compute-0 ceph-mon[74456]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:44:28 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 26 09:44:28 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 26 09:44:28 compute-0 ceph-mon[74456]: osdmap e94: 3 total, 3 up, 3 in
Jan 26 09:44:28 compute-0 ceph-mon[74456]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 09:44:28 compute-0 ceph-mon[74456]: 12.c deep-scrub starts
Jan 26 09:44:28 compute-0 sshd-session[101352]: Connection closed by 192.168.122.30 port 58784
Jan 26 09:44:28 compute-0 sshd-session[101349]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:44:28 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 26 09:44:28 compute-0 systemd[1]: session-37.scope: Consumed 8.241s CPU time.
Jan 26 09:44:28 compute-0 systemd-logind[787]: Session 37 logged out. Waiting for processes to exit.
Jan 26 09:44:28 compute-0 systemd-logind[787]: Removed session 37.
Jan 26 09:44:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:28.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:28 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:44:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:28.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:44:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:44:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v11: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 10 op/s
Jan 26 09:44:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 26 09:44:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 26 09:44:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:44:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:28 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.a scrub starts
Jan 26 09:44:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:44:28 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.a scrub ok
Jan 26 09:44:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:44:28 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:44:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:44:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:44:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:44:28 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:44:28 compute-0 sudo[103776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:28 compute-0 sudo[103776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:28 compute-0 sudo[103776]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:28 compute-0 sudo[103801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:44:28 compute-0 sudo[103801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:29 compute-0 ceph-mon[74456]: 11.1 scrub starts
Jan 26 09:44:29 compute-0 ceph-mon[74456]: 11.1 scrub ok
Jan 26 09:44:29 compute-0 ceph-mon[74456]: Updating compute-1:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:44:29 compute-0 ceph-mon[74456]: Updating compute-0:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:44:29 compute-0 ceph-mon[74456]: 10.4 scrub starts
Jan 26 09:44:29 compute-0 ceph-mon[74456]: 10.4 scrub ok
Jan 26 09:44:29 compute-0 ceph-mon[74456]: Updating compute-2:/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/config/ceph.client.admin.keyring
Jan 26 09:44:29 compute-0 ceph-mon[74456]: 11.f scrub starts
Jan 26 09:44:29 compute-0 ceph-mon[74456]: 11.f scrub ok
Jan 26 09:44:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:29 compute-0 ceph-mon[74456]: 12.c deep-scrub ok
Jan 26 09:44:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 26 09:44:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:29 compute-0 ceph-mon[74456]: 12.a scrub starts
Jan 26 09:44:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:29 compute-0 ceph-mon[74456]: 12.a scrub ok
Jan 26 09:44:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:44:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:44:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:44:29 compute-0 ceph-mon[74456]: 8.8 deep-scrub starts
Jan 26 09:44:29 compute-0 ceph-mon[74456]: 8.8 deep-scrub ok
Jan 26 09:44:29 compute-0 podman[103870]: 2026-01-26 09:44:29.200315336 +0000 UTC m=+0.025772000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:44:29 compute-0 podman[103870]: 2026-01-26 09:44:29.30812567 +0000 UTC m=+0.133582324 container create af6327425a392c219b4f7a335b661cc92920a329532081744a8ecb2ce6324a5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:44:29 compute-0 systemd[1]: Started libpod-conmon-af6327425a392c219b4f7a335b661cc92920a329532081744a8ecb2ce6324a5e.scope.
Jan 26 09:44:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 26 09:44:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:29 compute-0 podman[103870]: 2026-01-26 09:44:29.599454321 +0000 UTC m=+0.424910975 container init af6327425a392c219b4f7a335b661cc92920a329532081744a8ecb2ce6324a5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 26 09:44:29 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 26 09:44:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 26 09:44:29 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 26 09:44:29 compute-0 podman[103870]: 2026-01-26 09:44:29.611721478 +0000 UTC m=+0.437178112 container start af6327425a392c219b4f7a335b661cc92920a329532081744a8ecb2ce6324a5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sutherland, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 26 09:44:29 compute-0 quirky_sutherland[103887]: 167 167
Jan 26 09:44:29 compute-0 systemd[1]: libpod-af6327425a392c219b4f7a335b661cc92920a329532081744a8ecb2ce6324a5e.scope: Deactivated successfully.
Jan 26 09:44:29 compute-0 podman[103870]: 2026-01-26 09:44:29.618691991 +0000 UTC m=+0.444148625 container attach af6327425a392c219b4f7a335b661cc92920a329532081744a8ecb2ce6324a5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sutherland, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 09:44:29 compute-0 podman[103870]: 2026-01-26 09:44:29.619337809 +0000 UTC m=+0.444794443 container died af6327425a392c219b4f7a335b661cc92920a329532081744a8ecb2ce6324a5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sutherland, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:44:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d3bba534b8d04622cdee615a8feb0ae1c62b53d85726cd7b783cbe6d70906de-merged.mount: Deactivated successfully.
Jan 26 09:44:29 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Jan 26 09:44:29 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Jan 26 09:44:29 compute-0 podman[103870]: 2026-01-26 09:44:29.748053942 +0000 UTC m=+0.573510576 container remove af6327425a392c219b4f7a335b661cc92920a329532081744a8ecb2ce6324a5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:44:29 compute-0 systemd[1]: libpod-conmon-af6327425a392c219b4f7a335b661cc92920a329532081744a8ecb2ce6324a5e.scope: Deactivated successfully.
Jan 26 09:44:29 compute-0 podman[103910]: 2026-01-26 09:44:29.906727586 +0000 UTC m=+0.043606469 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:44:30 compute-0 podman[103910]: 2026-01-26 09:44:30.002070998 +0000 UTC m=+0.138949891 container create 7662eca4ed2b525b2da2aa00412d64734427ee6935f96ad1e23b96b26751d5b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:44:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:30 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4002f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:30 compute-0 systemd[1]: Started libpod-conmon-7662eca4ed2b525b2da2aa00412d64734427ee6935f96ad1e23b96b26751d5b1.scope.
Jan 26 09:44:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:30 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d301c5140c984289613b311a997cfe8b78d28376888f47b6b296510a9c6b87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d301c5140c984289613b311a997cfe8b78d28376888f47b6b296510a9c6b87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d301c5140c984289613b311a997cfe8b78d28376888f47b6b296510a9c6b87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d301c5140c984289613b311a997cfe8b78d28376888f47b6b296510a9c6b87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d301c5140c984289613b311a997cfe8b78d28376888f47b6b296510a9c6b87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:30 compute-0 podman[103910]: 2026-01-26 09:44:30.125069234 +0000 UTC m=+0.261948187 container init 7662eca4ed2b525b2da2aa00412d64734427ee6935f96ad1e23b96b26751d5b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_carver, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:44:30 compute-0 podman[103910]: 2026-01-26 09:44:30.132652875 +0000 UTC m=+0.269531778 container start 7662eca4ed2b525b2da2aa00412d64734427ee6935f96ad1e23b96b26751d5b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_carver, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:44:30 compute-0 podman[103910]: 2026-01-26 09:44:30.13765226 +0000 UTC m=+0.274531153 container attach 7662eca4ed2b525b2da2aa00412d64734427ee6935f96ad1e23b96b26751d5b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_carver, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 09:44:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:30.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:30 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:30 compute-0 ceph-mon[74456]: pgmap v11: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 10 op/s
Jan 26 09:44:30 compute-0 ceph-mon[74456]: 8.15 scrub starts
Jan 26 09:44:30 compute-0 ceph-mon[74456]: 8.15 scrub ok
Jan 26 09:44:30 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 26 09:44:30 compute-0 ceph-mon[74456]: osdmap e95: 3 total, 3 up, 3 in
Jan 26 09:44:30 compute-0 ceph-mon[74456]: 12.6 scrub starts
Jan 26 09:44:30 compute-0 ceph-mon[74456]: 12.6 scrub ok
Jan 26 09:44:30 compute-0 ceph-mon[74456]: 8.17 scrub starts
Jan 26 09:44:30 compute-0 ceph-mon[74456]: 8.17 scrub ok
Jan 26 09:44:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:30.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v13: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 5 op/s
Jan 26 09:44:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 26 09:44:30 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 26 09:44:30 compute-0 zen_carver[103927]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:44:30 compute-0 zen_carver[103927]: --> All data devices are unavailable
Jan 26 09:44:30 compute-0 systemd[1]: libpod-7662eca4ed2b525b2da2aa00412d64734427ee6935f96ad1e23b96b26751d5b1.scope: Deactivated successfully.
Jan 26 09:44:30 compute-0 podman[103910]: 2026-01-26 09:44:30.56824064 +0000 UTC m=+0.705119523 container died 7662eca4ed2b525b2da2aa00412d64734427ee6935f96ad1e23b96b26751d5b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_carver, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:44:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5d301c5140c984289613b311a997cfe8b78d28376888f47b6b296510a9c6b87-merged.mount: Deactivated successfully.
Jan 26 09:44:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 26 09:44:30 compute-0 podman[103910]: 2026-01-26 09:44:30.611323613 +0000 UTC m=+0.748202476 container remove 7662eca4ed2b525b2da2aa00412d64734427ee6935f96ad1e23b96b26751d5b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_carver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 26 09:44:30 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 26 09:44:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 26 09:44:30 compute-0 systemd[1]: libpod-conmon-7662eca4ed2b525b2da2aa00412d64734427ee6935f96ad1e23b96b26751d5b1.scope: Deactivated successfully.
Jan 26 09:44:30 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 26 09:44:30 compute-0 sudo[103801]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:30 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Jan 26 09:44:30 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Jan 26 09:44:30 compute-0 sudo[103956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:30 compute-0 sudo[103956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:30 compute-0 sudo[103956]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:30 compute-0 sudo[103981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:44:30 compute-0 sudo[103981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:31 compute-0 ceph-mon[74456]: 12.13 scrub starts
Jan 26 09:44:31 compute-0 ceph-mon[74456]: 12.13 scrub ok
Jan 26 09:44:31 compute-0 ceph-mon[74456]: pgmap v13: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 5 op/s
Jan 26 09:44:31 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 26 09:44:31 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 26 09:44:31 compute-0 ceph-mon[74456]: osdmap e96: 3 total, 3 up, 3 in
Jan 26 09:44:31 compute-0 ceph-mon[74456]: 12.1c scrub starts
Jan 26 09:44:31 compute-0 ceph-mon[74456]: 12.1c scrub ok
Jan 26 09:44:31 compute-0 ceph-mon[74456]: 8.14 scrub starts
Jan 26 09:44:31 compute-0 ceph-mon[74456]: 8.14 scrub ok
Jan 26 09:44:31 compute-0 podman[104047]: 2026-01-26 09:44:31.222949128 +0000 UTC m=+0.043467515 container create a77179e0b5553127a455ba2ff99a1b84fd0a54f9e1db8867e9e0821f0a2494cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_gagarin, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:44:31 compute-0 systemd[1]: Started libpod-conmon-a77179e0b5553127a455ba2ff99a1b84fd0a54f9e1db8867e9e0821f0a2494cb.scope.
Jan 26 09:44:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:31 compute-0 podman[104047]: 2026-01-26 09:44:31.204697857 +0000 UTC m=+0.025216264 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:44:31 compute-0 podman[104047]: 2026-01-26 09:44:31.306147756 +0000 UTC m=+0.126666173 container init a77179e0b5553127a455ba2ff99a1b84fd0a54f9e1db8867e9e0821f0a2494cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_gagarin, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 09:44:31 compute-0 podman[104047]: 2026-01-26 09:44:31.31279474 +0000 UTC m=+0.133313117 container start a77179e0b5553127a455ba2ff99a1b84fd0a54f9e1db8867e9e0821f0a2494cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_gagarin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:44:31 compute-0 podman[104047]: 2026-01-26 09:44:31.316035844 +0000 UTC m=+0.136554231 container attach a77179e0b5553127a455ba2ff99a1b84fd0a54f9e1db8867e9e0821f0a2494cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_gagarin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 26 09:44:31 compute-0 admiring_gagarin[104063]: 167 167
Jan 26 09:44:31 compute-0 systemd[1]: libpod-a77179e0b5553127a455ba2ff99a1b84fd0a54f9e1db8867e9e0821f0a2494cb.scope: Deactivated successfully.
Jan 26 09:44:31 compute-0 podman[104047]: 2026-01-26 09:44:31.317887397 +0000 UTC m=+0.138405804 container died a77179e0b5553127a455ba2ff99a1b84fd0a54f9e1db8867e9e0821f0a2494cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_gagarin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae8f9ce8a447017fb3f44886d413ba3498efbba9660ef1897827b55e971a1add-merged.mount: Deactivated successfully.
Jan 26 09:44:31 compute-0 podman[104047]: 2026-01-26 09:44:31.350359382 +0000 UTC m=+0.170877769 container remove a77179e0b5553127a455ba2ff99a1b84fd0a54f9e1db8867e9e0821f0a2494cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_gagarin, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:44:31 compute-0 systemd[1]: libpod-conmon-a77179e0b5553127a455ba2ff99a1b84fd0a54f9e1db8867e9e0821f0a2494cb.scope: Deactivated successfully.
Jan 26 09:44:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094431 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:44:31 compute-0 podman[104086]: 2026-01-26 09:44:31.483900035 +0000 UTC m=+0.026230684 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:44:31 compute-0 podman[104086]: 2026-01-26 09:44:31.612952317 +0000 UTC m=+0.155282976 container create 9dfca110ccc6f9aa7aa8c47207f2e514bceda983d5767ac853e5b51f6930bdc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:44:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 26 09:44:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 26 09:44:31 compute-0 systemd[1]: Started libpod-conmon-9dfca110ccc6f9aa7aa8c47207f2e514bceda983d5767ac853e5b51f6930bdc3.scope.
Jan 26 09:44:31 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 26 09:44:31 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 26 09:44:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d5cd21124efaa34e1123bbf9971e18b81c3b05b3b19994db22e621b6cd3018/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d5cd21124efaa34e1123bbf9971e18b81c3b05b3b19994db22e621b6cd3018/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d5cd21124efaa34e1123bbf9971e18b81c3b05b3b19994db22e621b6cd3018/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d5cd21124efaa34e1123bbf9971e18b81c3b05b3b19994db22e621b6cd3018/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:31 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 26 09:44:31 compute-0 podman[104086]: 2026-01-26 09:44:31.751459054 +0000 UTC m=+0.293789733 container init 9dfca110ccc6f9aa7aa8c47207f2e514bceda983d5767ac853e5b51f6930bdc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 09:44:31 compute-0 podman[104086]: 2026-01-26 09:44:31.757884672 +0000 UTC m=+0.300215301 container start 9dfca110ccc6f9aa7aa8c47207f2e514bceda983d5767ac853e5b51f6930bdc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_swartz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:44:31 compute-0 podman[104086]: 2026-01-26 09:44:31.761697312 +0000 UTC m=+0.304027981 container attach 9dfca110ccc6f9aa7aa8c47207f2e514bceda983d5767ac853e5b51f6930bdc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_swartz, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]: {
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:     "0": [
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:         {
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:             "devices": [
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "/dev/loop3"
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:             ],
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:             "lv_name": "ceph_lv0",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:             "lv_size": "21470642176",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:             "name": "ceph_lv0",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:             "tags": {
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "ceph.cluster_name": "ceph",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "ceph.crush_device_class": "",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "ceph.encrypted": "0",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "ceph.osd_id": "0",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "ceph.type": "block",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "ceph.vdo": "0",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:                 "ceph.with_tpm": "0"
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:             },
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:             "type": "block",
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:             "vg_name": "ceph_vg0"
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:         }
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]:     ]
Jan 26 09:44:32 compute-0 nostalgic_swartz[104102]: }
Jan 26 09:44:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:32 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:32 compute-0 systemd[1]: libpod-9dfca110ccc6f9aa7aa8c47207f2e514bceda983d5767ac853e5b51f6930bdc3.scope: Deactivated successfully.
Jan 26 09:44:32 compute-0 podman[104086]: 2026-01-26 09:44:32.069542493 +0000 UTC m=+0.611873122 container died 9dfca110ccc6f9aa7aa8c47207f2e514bceda983d5767ac853e5b51f6930bdc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:44:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-26d5cd21124efaa34e1123bbf9971e18b81c3b05b3b19994db22e621b6cd3018-merged.mount: Deactivated successfully.
Jan 26 09:44:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:32 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4002f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:32.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:32 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:32 compute-0 podman[104086]: 2026-01-26 09:44:32.195934309 +0000 UTC m=+0.738264938 container remove 9dfca110ccc6f9aa7aa8c47207f2e514bceda983d5767ac853e5b51f6930bdc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_swartz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:44:32 compute-0 systemd[1]: libpod-conmon-9dfca110ccc6f9aa7aa8c47207f2e514bceda983d5767ac853e5b51f6930bdc3.scope: Deactivated successfully.
Jan 26 09:44:32 compute-0 sudo[103981]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:32 compute-0 sudo[104125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:32 compute-0 sudo[104125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:32 compute-0 sudo[104125]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:32 compute-0 sudo[104150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:44:32 compute-0 sudo[104150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:32.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v16: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 26 09:44:32 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 26 09:44:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:44:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 26 09:44:32 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Jan 26 09:44:32 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Jan 26 09:44:32 compute-0 ceph-mon[74456]: 12.11 scrub starts
Jan 26 09:44:32 compute-0 ceph-mon[74456]: 12.11 scrub ok
Jan 26 09:44:32 compute-0 ceph-mon[74456]: osdmap e97: 3 total, 3 up, 3 in
Jan 26 09:44:32 compute-0 ceph-mon[74456]: 10.8 scrub starts
Jan 26 09:44:32 compute-0 ceph-mon[74456]: 10.8 scrub ok
Jan 26 09:44:32 compute-0 ceph-mon[74456]: 11.12 scrub starts
Jan 26 09:44:32 compute-0 ceph-mon[74456]: 11.12 scrub ok
Jan 26 09:44:32 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 26 09:44:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 26 09:44:32 compute-0 podman[104215]: 2026-01-26 09:44:32.831087877 +0000 UTC m=+0.057123173 container create 4d39471f9493d74574f65df13efa643f213f982c63c65e2b5fa38dc75d6a5c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_elgamal, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:44:32 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 26 09:44:32 compute-0 systemd[1]: Started libpod-conmon-4d39471f9493d74574f65df13efa643f213f982c63c65e2b5fa38dc75d6a5c73.scope.
Jan 26 09:44:32 compute-0 podman[104215]: 2026-01-26 09:44:32.796291625 +0000 UTC m=+0.022326931 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:44:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:32 compute-0 podman[104215]: 2026-01-26 09:44:32.977669709 +0000 UTC m=+0.203705015 container init 4d39471f9493d74574f65df13efa643f213f982c63c65e2b5fa38dc75d6a5c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_elgamal, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:44:32 compute-0 podman[104215]: 2026-01-26 09:44:32.985504116 +0000 UTC m=+0.211539442 container start 4d39471f9493d74574f65df13efa643f213f982c63c65e2b5fa38dc75d6a5c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 26 09:44:32 compute-0 podman[104215]: 2026-01-26 09:44:32.989662067 +0000 UTC m=+0.215697373 container attach 4d39471f9493d74574f65df13efa643f213f982c63c65e2b5fa38dc75d6a5c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:44:32 compute-0 eager_elgamal[104231]: 167 167
Jan 26 09:44:32 compute-0 systemd[1]: libpod-4d39471f9493d74574f65df13efa643f213f982c63c65e2b5fa38dc75d6a5c73.scope: Deactivated successfully.
Jan 26 09:44:32 compute-0 podman[104215]: 2026-01-26 09:44:32.99284668 +0000 UTC m=+0.218882016 container died 4d39471f9493d74574f65df13efa643f213f982c63c65e2b5fa38dc75d6a5c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_elgamal, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 09:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-133c4a28a7d144032e6720855380d26dbeb24d431aa254de6af8037fe0d491d5-merged.mount: Deactivated successfully.
Jan 26 09:44:33 compute-0 podman[104215]: 2026-01-26 09:44:33.040564468 +0000 UTC m=+0.266599764 container remove 4d39471f9493d74574f65df13efa643f213f982c63c65e2b5fa38dc75d6a5c73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_elgamal, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:44:33 compute-0 systemd[1]: libpod-conmon-4d39471f9493d74574f65df13efa643f213f982c63c65e2b5fa38dc75d6a5c73.scope: Deactivated successfully.
Jan 26 09:44:33 compute-0 podman[104255]: 2026-01-26 09:44:33.168332093 +0000 UTC m=+0.023373251 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:44:33 compute-0 podman[104255]: 2026-01-26 09:44:33.578599411 +0000 UTC m=+0.433640599 container create e0086bb75538a5988ba6ce469f6fd212301c2b7e51c112588e31208a250fe542 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_sinoussi, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 09:44:33 compute-0 systemd[1]: Started libpod-conmon-e0086bb75538a5988ba6ce469f6fd212301c2b7e51c112588e31208a250fe542.scope.
Jan 26 09:44:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:44:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:44:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:33 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.19 deep-scrub starts
Jan 26 09:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1baa193d6b6497a5e68bab690c52dd9d916f3537fa66a3bf5dbf40c9e23e5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1baa193d6b6497a5e68bab690c52dd9d916f3537fa66a3bf5dbf40c9e23e5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1baa193d6b6497a5e68bab690c52dd9d916f3537fa66a3bf5dbf40c9e23e5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d1baa193d6b6497a5e68bab690c52dd9d916f3537fa66a3bf5dbf40c9e23e5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:33 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.19 deep-scrub ok
Jan 26 09:44:33 compute-0 podman[104255]: 2026-01-26 09:44:33.698232469 +0000 UTC m=+0.553273627 container init e0086bb75538a5988ba6ce469f6fd212301c2b7e51c112588e31208a250fe542 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_sinoussi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:44:33 compute-0 podman[104255]: 2026-01-26 09:44:33.707564041 +0000 UTC m=+0.562605219 container start e0086bb75538a5988ba6ce469f6fd212301c2b7e51c112588e31208a250fe542 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_sinoussi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:44:33 compute-0 podman[104255]: 2026-01-26 09:44:33.7133812 +0000 UTC m=+0.568422348 container attach e0086bb75538a5988ba6ce469f6fd212301c2b7e51c112588e31208a250fe542 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_sinoussi, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:44:33 compute-0 ceph-mon[74456]: 11.16 deep-scrub starts
Jan 26 09:44:33 compute-0 ceph-mon[74456]: 11.16 deep-scrub ok
Jan 26 09:44:33 compute-0 ceph-mon[74456]: pgmap v16: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 26 09:44:33 compute-0 ceph-mon[74456]: 11.e scrub starts
Jan 26 09:44:33 compute-0 ceph-mon[74456]: 11.e scrub ok
Jan 26 09:44:33 compute-0 ceph-mon[74456]: 12.8 scrub starts
Jan 26 09:44:33 compute-0 ceph-mon[74456]: 12.8 scrub ok
Jan 26 09:44:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 26 09:44:33 compute-0 ceph-mon[74456]: osdmap e98: 3 total, 3 up, 3 in
Jan 26 09:44:33 compute-0 ceph-mon[74456]: 11.1d scrub starts
Jan 26 09:44:33 compute-0 ceph-mon[74456]: 11.1d scrub ok
Jan 26 09:44:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:44:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 26 09:44:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 26 09:44:33 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 26 09:44:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:34 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:34 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:34.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:34 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4002f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:34.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:34 compute-0 lvm[104348]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:44:34 compute-0 lvm[104348]: VG ceph_vg0 finished
Jan 26 09:44:34 compute-0 thirsty_sinoussi[104272]: {}
Jan 26 09:44:34 compute-0 systemd[1]: libpod-e0086bb75538a5988ba6ce469f6fd212301c2b7e51c112588e31208a250fe542.scope: Deactivated successfully.
Jan 26 09:44:34 compute-0 podman[104255]: 2026-01-26 09:44:34.476400867 +0000 UTC m=+1.331442035 container died e0086bb75538a5988ba6ce469f6fd212301c2b7e51c112588e31208a250fe542 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Jan 26 09:44:34 compute-0 systemd[1]: libpod-e0086bb75538a5988ba6ce469f6fd212301c2b7e51c112588e31208a250fe542.scope: Consumed 1.275s CPU time.
Jan 26 09:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d1baa193d6b6497a5e68bab690c52dd9d916f3537fa66a3bf5dbf40c9e23e5b-merged.mount: Deactivated successfully.
Jan 26 09:44:34 compute-0 podman[104255]: 2026-01-26 09:44:34.520350185 +0000 UTC m=+1.375391323 container remove e0086bb75538a5988ba6ce469f6fd212301c2b7e51c112588e31208a250fe542 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_sinoussi, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 09:44:34 compute-0 systemd[1]: libpod-conmon-e0086bb75538a5988ba6ce469f6fd212301c2b7e51c112588e31208a250fe542.scope: Deactivated successfully.
Jan 26 09:44:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v19: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 26 09:44:34 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 26 09:44:34 compute-0 sudo[104150]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:44:34 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:44:34 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 26 09:44:34 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 26 09:44:34 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:34 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 26 09:44:34 compute-0 sudo[104365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:44:34 compute-0 sudo[104364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:44:34 compute-0 sudo[104365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:34 compute-0 sudo[104364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:34 compute-0 sudo[104365]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:34 compute-0 sudo[104364]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:34 compute-0 ceph-mon[74456]: 8.16 scrub starts
Jan 26 09:44:34 compute-0 ceph-mon[74456]: 8.16 scrub ok
Jan 26 09:44:34 compute-0 ceph-mon[74456]: 12.19 deep-scrub starts
Jan 26 09:44:34 compute-0 ceph-mon[74456]: 12.19 deep-scrub ok
Jan 26 09:44:34 compute-0 ceph-mon[74456]: 8.18 scrub starts
Jan 26 09:44:34 compute-0 ceph-mon[74456]: 8.18 scrub ok
Jan 26 09:44:34 compute-0 ceph-mon[74456]: osdmap e99: 3 total, 3 up, 3 in
Jan 26 09:44:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 26 09:44:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:34 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Jan 26 09:44:34 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Jan 26 09:44:34 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Jan 26 09:44:34 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Jan 26 09:44:34 compute-0 sudo[104414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:34 compute-0 sudo[104414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 26 09:44:34 compute-0 sudo[104414]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:34 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 26 09:44:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 26 09:44:34 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 26 09:44:34 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 100 pg[9.10( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=2 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=100 pruub=11.851376534s) [1] r=-1 lpr=100 pi=[63,100)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 264.462982178s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:34 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 100 pg[9.10( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=2 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=100 pruub=11.851170540s) [1] r=-1 lpr=100 pi=[63,100)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 264.462982178s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:35 compute-0 sudo[104439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:44:35 compute-0 sudo[104439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:35 compute-0 systemd[1]: Stopping Ceph node-exporter.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:44:35 compute-0 podman[104511]: 2026-01-26 09:44:35.532823794 +0000 UTC m=+0.045874845 container died 57a35f5609c036543a7218c3413c7cd92ec725c73b5cc2d0a3c41170bf8442ad (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-51986ee8e92485b242aae3a1338ba7fcf09f06e9b210689232bc8e53e2a66e03-merged.mount: Deactivated successfully.
Jan 26 09:44:35 compute-0 podman[104511]: 2026-01-26 09:44:35.5725898 +0000 UTC m=+0.085640841 container remove 57a35f5609c036543a7218c3413c7cd92ec725c73b5cc2d0a3c41170bf8442ad (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:35 compute-0 bash[104511]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0
Jan 26 09:44:35 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Jan 26 09:44:35 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Jan 26 09:44:35 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Jan 26 09:44:35 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@node-exporter.compute-0.service: Failed with result 'exit-code'.
Jan 26 09:44:35 compute-0 systemd[1]: Stopped Ceph node-exporter.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:44:35 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@node-exporter.compute-0.service: Consumed 1.980s CPU time.
Jan 26 09:44:35 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:44:35 compute-0 ceph-mon[74456]: pgmap v19: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:35 compute-0 ceph-mon[74456]: 8.a scrub starts
Jan 26 09:44:35 compute-0 ceph-mon[74456]: 8.a scrub ok
Jan 26 09:44:35 compute-0 ceph-mon[74456]: 10.2 scrub starts
Jan 26 09:44:35 compute-0 ceph-mon[74456]: 10.2 scrub ok
Jan 26 09:44:35 compute-0 ceph-mon[74456]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Jan 26 09:44:35 compute-0 ceph-mon[74456]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Jan 26 09:44:35 compute-0 ceph-mon[74456]: 11.14 scrub starts
Jan 26 09:44:35 compute-0 ceph-mon[74456]: 11.14 scrub ok
Jan 26 09:44:35 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 26 09:44:35 compute-0 ceph-mon[74456]: osdmap e100: 3 total, 3 up, 3 in
Jan 26 09:44:35 compute-0 podman[104615]: 2026-01-26 09:44:35.951997562 +0000 UTC m=+0.047036978 container create 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 26 09:44:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 26 09:44:35 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 26 09:44:35 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 101 pg[9.10( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=2 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=101) [1]/[0] r=0 lpr=101 pi=[63,101)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:35 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 101 pg[9.10( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=2 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=101) [1]/[0] r=0 lpr=101 pi=[63,101)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3233069e017d90625ab13b77ac267e8727c5f23f51c49a666106b041156964b0/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:35 compute-0 podman[104615]: 2026-01-26 09:44:35.998942578 +0000 UTC m=+0.093982014 container init 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 podman[104615]: 2026-01-26 09:44:36.003659924 +0000 UTC m=+0.098699310 container start 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 bash[104615]: 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71
Jan 26 09:44:36 compute-0 podman[104615]: 2026-01-26 09:44:35.9305779 +0000 UTC m=+0.025617316 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.009Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.009Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.011Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.011Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.011Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.011Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 26 09:44:36 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=arp
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=bcache
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=bonding
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=cpu
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=dmi
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=edac
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=entropy
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=filefd
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=hwmon
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=netclass
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=netdev
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=netstat
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=nfs
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=nvme
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=os
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=pressure
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=rapl
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=selinux
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=softnet
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=stat
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=textfile
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=thermal_zone
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=time
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=uname
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=xfs
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.012Z caller=node_exporter.go:117 level=info collector=zfs
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.014Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0[104630]: ts=2026-01-26T09:44:36.014Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:36 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:36 compute-0 sudo[104439]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:44:36 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:44:36 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:36 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 26 09:44:36 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:36 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:36 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 26 09:44:36 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 26 09:44:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:36.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:36 compute-0 sudo[104639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:36 compute-0 sudo[104639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:36 compute-0 sudo[104639]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:36 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4002f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:36 compute-0 sudo[104664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:44:36 compute-0 sudo[104664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:36.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v22: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 3 objects/s recovering
Jan 26 09:44:36 compute-0 podman[104709]: 2026-01-26 09:44:36.568658473 +0000 UTC m=+0.050376126 volume create 4b49a0fd4098542a849b334af684dad72e0f5030208b58375be5127e4605f487
Jan 26 09:44:36 compute-0 podman[104709]: 2026-01-26 09:44:36.578832199 +0000 UTC m=+0.060549842 container create c5d81a45cdc7792c61164a3847d50ba477b8a7fb26c15205879c4b835e84494e (image=quay.io/prometheus/alertmanager:v0.25.0, name=priceless_shirley, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 systemd[1]: Started libpod-conmon-c5d81a45cdc7792c61164a3847d50ba477b8a7fb26c15205879c4b835e84494e.scope.
Jan 26 09:44:36 compute-0 podman[104709]: 2026-01-26 09:44:36.54414677 +0000 UTC m=+0.025864493 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 26 09:44:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:44:36] "GET /metrics HTTP/1.1" 200 48285 "" "Prometheus/2.51.0"
Jan 26 09:44:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:44:36] "GET /metrics HTTP/1.1" 200 48285 "" "Prometheus/2.51.0"
Jan 26 09:44:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93ade4c6779f547c516f0270b4a913349b163d173999db02a54f1a674c3abe6/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:36 compute-0 podman[104709]: 2026-01-26 09:44:36.686081677 +0000 UTC m=+0.167799350 container init c5d81a45cdc7792c61164a3847d50ba477b8a7fb26c15205879c4b835e84494e (image=quay.io/prometheus/alertmanager:v0.25.0, name=priceless_shirley, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 podman[104709]: 2026-01-26 09:44:36.698663682 +0000 UTC m=+0.180381315 container start c5d81a45cdc7792c61164a3847d50ba477b8a7fb26c15205879c4b835e84494e (image=quay.io/prometheus/alertmanager:v0.25.0, name=priceless_shirley, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 podman[104709]: 2026-01-26 09:44:36.701978079 +0000 UTC m=+0.183695762 container attach c5d81a45cdc7792c61164a3847d50ba477b8a7fb26c15205879c4b835e84494e (image=quay.io/prometheus/alertmanager:v0.25.0, name=priceless_shirley, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 priceless_shirley[104726]: 65534 65534
Jan 26 09:44:36 compute-0 systemd[1]: libpod-c5d81a45cdc7792c61164a3847d50ba477b8a7fb26c15205879c4b835e84494e.scope: Deactivated successfully.
Jan 26 09:44:36 compute-0 podman[104709]: 2026-01-26 09:44:36.704133982 +0000 UTC m=+0.185851625 container died c5d81a45cdc7792c61164a3847d50ba477b8a7fb26c15205879c4b835e84494e (image=quay.io/prometheus/alertmanager:v0.25.0, name=priceless_shirley, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 26 09:44:36 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 26 09:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e93ade4c6779f547c516f0270b4a913349b163d173999db02a54f1a674c3abe6-merged.mount: Deactivated successfully.
Jan 26 09:44:36 compute-0 podman[104709]: 2026-01-26 09:44:36.743995941 +0000 UTC m=+0.225713574 container remove c5d81a45cdc7792c61164a3847d50ba477b8a7fb26c15205879c4b835e84494e (image=quay.io/prometheus/alertmanager:v0.25.0, name=priceless_shirley, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 podman[104709]: 2026-01-26 09:44:36.747833403 +0000 UTC m=+0.229551036 volume remove 4b49a0fd4098542a849b334af684dad72e0f5030208b58375be5127e4605f487
Jan 26 09:44:36 compute-0 systemd[1]: libpod-conmon-c5d81a45cdc7792c61164a3847d50ba477b8a7fb26c15205879c4b835e84494e.scope: Deactivated successfully.
Jan 26 09:44:36 compute-0 ceph-mon[74456]: 8.b scrub starts
Jan 26 09:44:36 compute-0 ceph-mon[74456]: 8.b scrub ok
Jan 26 09:44:36 compute-0 ceph-mon[74456]: 10.19 deep-scrub starts
Jan 26 09:44:36 compute-0 ceph-mon[74456]: 10.19 deep-scrub ok
Jan 26 09:44:36 compute-0 ceph-mon[74456]: 8.19 scrub starts
Jan 26 09:44:36 compute-0 ceph-mon[74456]: 8.19 scrub ok
Jan 26 09:44:36 compute-0 ceph-mon[74456]: osdmap e101: 3 total, 3 up, 3 in
Jan 26 09:44:36 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:36 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:36 compute-0 ceph-mon[74456]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 26 09:44:36 compute-0 ceph-mon[74456]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 26 09:44:36 compute-0 podman[104742]: 2026-01-26 09:44:36.827149569 +0000 UTC m=+0.048887843 volume create 1d0c5f256ece9eda666e8c55649ac91fd89a730fe6fa3a67f3c75fe3ff20ecc7
Jan 26 09:44:36 compute-0 podman[104742]: 2026-01-26 09:44:36.839650532 +0000 UTC m=+0.061388796 container create 2b4bca005d72f502887c448b34917f4fcca892d9dc5622e5cc81e2a8949f0af6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=sleepy_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 systemd[1]: Started libpod-conmon-2b4bca005d72f502887c448b34917f4fcca892d9dc5622e5cc81e2a8949f0af6.scope.
Jan 26 09:44:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70f6f27818d77e00cc144ccfe976faf9f02c3f112bb0612a8c8ba41c645ba46/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:36 compute-0 podman[104742]: 2026-01-26 09:44:36.812777851 +0000 UTC m=+0.034516125 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 26 09:44:36 compute-0 podman[104742]: 2026-01-26 09:44:36.916764284 +0000 UTC m=+0.138502558 container init 2b4bca005d72f502887c448b34917f4fcca892d9dc5622e5cc81e2a8949f0af6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=sleepy_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 podman[104742]: 2026-01-26 09:44:36.922090369 +0000 UTC m=+0.143828623 container start 2b4bca005d72f502887c448b34917f4fcca892d9dc5622e5cc81e2a8949f0af6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=sleepy_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 podman[104742]: 2026-01-26 09:44:36.924902591 +0000 UTC m=+0.146640885 container attach 2b4bca005d72f502887c448b34917f4fcca892d9dc5622e5cc81e2a8949f0af6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=sleepy_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 sleepy_golick[104758]: 65534 65534
Jan 26 09:44:36 compute-0 systemd[1]: libpod-2b4bca005d72f502887c448b34917f4fcca892d9dc5622e5cc81e2a8949f0af6.scope: Deactivated successfully.
Jan 26 09:44:36 compute-0 podman[104742]: 2026-01-26 09:44:36.926375814 +0000 UTC m=+0.148114068 container died 2b4bca005d72f502887c448b34917f4fcca892d9dc5622e5cc81e2a8949f0af6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=sleepy_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e70f6f27818d77e00cc144ccfe976faf9f02c3f112bb0612a8c8ba41c645ba46-merged.mount: Deactivated successfully.
Jan 26 09:44:36 compute-0 podman[104742]: 2026-01-26 09:44:36.955730117 +0000 UTC m=+0.177468371 container remove 2b4bca005d72f502887c448b34917f4fcca892d9dc5622e5cc81e2a8949f0af6 (image=quay.io/prometheus/alertmanager:v0.25.0, name=sleepy_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:36 compute-0 podman[104742]: 2026-01-26 09:44:36.959495857 +0000 UTC m=+0.181234131 volume remove 1d0c5f256ece9eda666e8c55649ac91fd89a730fe6fa3a67f3c75fe3ff20ecc7
Jan 26 09:44:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 26 09:44:36 compute-0 systemd[1]: libpod-conmon-2b4bca005d72f502887c448b34917f4fcca892d9dc5622e5cc81e2a8949f0af6.scope: Deactivated successfully.
Jan 26 09:44:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 26 09:44:36 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 26 09:44:36 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 102 pg[9.10( v 60'1159 (0'0,60'1159] local-lis/les=101/102 n=2 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=101) [1]/[0] async=[1] r=0 lpr=101 pi=[63,101)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:37 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:44:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[99303]: ts=2026-01-26T09:44:37.206Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Jan 26 09:44:37 compute-0 podman[104804]: 2026-01-26 09:44:37.217736736 +0000 UTC m=+0.050125509 container died c4359c311b7c569be419514f7aac4166a74171aef95e4c4175d3ad1795dea38a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-68f28aed2c445474a08ec8e835cf0e36e79dc86a07288c5541255fc051a52b09-merged.mount: Deactivated successfully.
Jan 26 09:44:37 compute-0 podman[104804]: 2026-01-26 09:44:37.263412914 +0000 UTC m=+0.095801637 container remove c4359c311b7c569be419514f7aac4166a74171aef95e4c4175d3ad1795dea38a (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:37 compute-0 podman[104804]: 2026-01-26 09:44:37.267307078 +0000 UTC m=+0.099695821 volume remove 57bd9804c922c4d04bf24455174cb499c61524445e0876249914e58f27264d95
Jan 26 09:44:37 compute-0 bash[104804]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0
Jan 26 09:44:37 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@alertmanager.compute-0.service: Deactivated successfully.
Jan 26 09:44:37 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:44:37 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@alertmanager.compute-0.service: Consumed 1.017s CPU time.
Jan 26 09:44:37 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:44:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:44:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 26 09:44:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 26 09:44:37 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 26 09:44:37 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 103 pg[9.10( v 60'1159 (0'0,60'1159] local-lis/les=101/102 n=2 ec=63/48 lis/c=101/63 les/c/f=102/65/0 sis=103 pruub=15.382603645s) [1] async=[1] r=-1 lpr=103 pi=[63,103)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 270.635559082s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:37 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 103 pg[9.10( v 60'1159 (0'0,60'1159] local-lis/les=101/102 n=2 ec=63/48 lis/c=101/63 les/c/f=102/65/0 sis=103 pruub=15.382547379s) [1] r=-1 lpr=103 pi=[63,103)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.635559082s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:37 compute-0 podman[104905]: 2026-01-26 09:44:37.667302247 +0000 UTC m=+0.050423037 volume create e7897728a758f4e6f040967af7814897ecdc6652b4a7a97a4ce9194386ec632c
Jan 26 09:44:37 compute-0 podman[104905]: 2026-01-26 09:44:37.678851413 +0000 UTC m=+0.061972243 container create c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:37 compute-0 systemd[93053]: Starting Mark boot as successful...
Jan 26 09:44:37 compute-0 systemd[93053]: Finished Mark boot as successful.
Jan 26 09:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284d4d4b8be48ce48f7ba5d2ac1e164070ef7ea38fc88a2415ec22ee9b845b68/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284d4d4b8be48ce48f7ba5d2ac1e164070ef7ea38fc88a2415ec22ee9b845b68/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:37 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 26 09:44:37 compute-0 podman[104905]: 2026-01-26 09:44:37.733219914 +0000 UTC m=+0.116340744 container init c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:37 compute-0 podman[104905]: 2026-01-26 09:44:37.642080814 +0000 UTC m=+0.025201694 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 26 09:44:37 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 26 09:44:37 compute-0 podman[104905]: 2026-01-26 09:44:37.739790515 +0000 UTC m=+0.122911305 container start c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:37 compute-0 bash[104905]: c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c
Jan 26 09:44:37 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:44:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:44:37.766Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Jan 26 09:44:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:44:37.766Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Jan 26 09:44:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:44:37.776Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Jan 26 09:44:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:44:37.778Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Jan 26 09:44:37 compute-0 ceph-mon[74456]: pgmap v22: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 3 objects/s recovering
Jan 26 09:44:37 compute-0 ceph-mon[74456]: 12.18 scrub starts
Jan 26 09:44:37 compute-0 ceph-mon[74456]: 12.18 scrub ok
Jan 26 09:44:37 compute-0 ceph-mon[74456]: 10.18 scrub starts
Jan 26 09:44:37 compute-0 ceph-mon[74456]: 10.18 scrub ok
Jan 26 09:44:37 compute-0 ceph-mon[74456]: 9.d scrub starts
Jan 26 09:44:37 compute-0 ceph-mon[74456]: 9.d scrub ok
Jan 26 09:44:37 compute-0 ceph-mon[74456]: osdmap e102: 3 total, 3 up, 3 in
Jan 26 09:44:37 compute-0 ceph-mon[74456]: osdmap e103: 3 total, 3 up, 3 in
Jan 26 09:44:37 compute-0 sudo[104664]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:44:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:44:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:37 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 26 09:44:37 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 26 09:44:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:44:37.838Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 26 09:44:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:44:37.839Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 26 09:44:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:44:37.843Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Jan 26 09:44:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:44:37.843Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Jan 26 09:44:37 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Jan 26 09:44:37 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Jan 26 09:44:37 compute-0 sudo[104942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:37 compute-0 sudo[104942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:37 compute-0 sudo[104942]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:38 compute-0 sudo[104967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
Jan 26 09:44:38 compute-0 sudo[104967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:38 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4002f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:38 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4002f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:38.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:38 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:38.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:38 compute-0 podman[105010]: 2026-01-26 09:44:38.479400811 +0000 UTC m=+0.054929038 container create 8045d7cdf90e4050d649f4e3e34c5e76331066e6b21b235d5db8c3e094f57aa1 (image=quay.io/ceph/grafana:10.4.0, name=zen_raman, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:38 compute-0 systemd[1]: Started libpod-conmon-8045d7cdf90e4050d649f4e3e34c5e76331066e6b21b235d5db8c3e094f57aa1.scope.
Jan 26 09:44:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v25: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 3 objects/s recovering
Jan 26 09:44:38 compute-0 podman[105010]: 2026-01-26 09:44:38.45667935 +0000 UTC m=+0.032207597 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 26 09:44:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:38 compute-0 podman[105010]: 2026-01-26 09:44:38.580534331 +0000 UTC m=+0.156062638 container init 8045d7cdf90e4050d649f4e3e34c5e76331066e6b21b235d5db8c3e094f57aa1 (image=quay.io/ceph/grafana:10.4.0, name=zen_raman, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:38 compute-0 podman[105010]: 2026-01-26 09:44:38.590237793 +0000 UTC m=+0.165766020 container start 8045d7cdf90e4050d649f4e3e34c5e76331066e6b21b235d5db8c3e094f57aa1 (image=quay.io/ceph/grafana:10.4.0, name=zen_raman, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:38 compute-0 podman[105010]: 2026-01-26 09:44:38.594509517 +0000 UTC m=+0.170037744 container attach 8045d7cdf90e4050d649f4e3e34c5e76331066e6b21b235d5db8c3e094f57aa1 (image=quay.io/ceph/grafana:10.4.0, name=zen_raman, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:38 compute-0 zen_raman[105026]: 472 0
Jan 26 09:44:38 compute-0 systemd[1]: libpod-8045d7cdf90e4050d649f4e3e34c5e76331066e6b21b235d5db8c3e094f57aa1.scope: Deactivated successfully.
Jan 26 09:44:38 compute-0 podman[105010]: 2026-01-26 09:44:38.595595839 +0000 UTC m=+0.171124076 container died 8045d7cdf90e4050d649f4e3e34c5e76331066e6b21b235d5db8c3e094f57aa1 (image=quay.io/ceph/grafana:10.4.0, name=zen_raman, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 26 09:44:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 26 09:44:38 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 26 09:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-71f234234d690ca0b40618260ba1319aad5d912bc8c721a0f7d12ed83f62b965-merged.mount: Deactivated successfully.
Jan 26 09:44:38 compute-0 podman[105010]: 2026-01-26 09:44:38.632131591 +0000 UTC m=+0.207659818 container remove 8045d7cdf90e4050d649f4e3e34c5e76331066e6b21b235d5db8c3e094f57aa1 (image=quay.io/ceph/grafana:10.4.0, name=zen_raman, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:38 compute-0 systemd[1]: libpod-conmon-8045d7cdf90e4050d649f4e3e34c5e76331066e6b21b235d5db8c3e094f57aa1.scope: Deactivated successfully.
Jan 26 09:44:38 compute-0 podman[105042]: 2026-01-26 09:44:38.688906703 +0000 UTC m=+0.039836170 container create 347ed50fcf700c9c4ed0d89160f3356ccc70a7f1611c73d6dfdc4d3e47f1ab12 (image=quay.io/ceph/grafana:10.4.0, name=loving_bhabha, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:38 compute-0 systemd[1]: Started libpod-conmon-347ed50fcf700c9c4ed0d89160f3356ccc70a7f1611c73d6dfdc4d3e47f1ab12.scope.
Jan 26 09:44:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:38 compute-0 podman[105042]: 2026-01-26 09:44:38.758368922 +0000 UTC m=+0.109298399 container init 347ed50fcf700c9c4ed0d89160f3356ccc70a7f1611c73d6dfdc4d3e47f1ab12 (image=quay.io/ceph/grafana:10.4.0, name=loving_bhabha, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:38 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 26 09:44:38 compute-0 podman[105042]: 2026-01-26 09:44:38.764651875 +0000 UTC m=+0.115581312 container start 347ed50fcf700c9c4ed0d89160f3356ccc70a7f1611c73d6dfdc4d3e47f1ab12 (image=quay.io/ceph/grafana:10.4.0, name=loving_bhabha, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:38 compute-0 podman[105042]: 2026-01-26 09:44:38.669962491 +0000 UTC m=+0.020891928 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 26 09:44:38 compute-0 loving_bhabha[105061]: 472 0
Jan 26 09:44:38 compute-0 systemd[1]: libpod-347ed50fcf700c9c4ed0d89160f3356ccc70a7f1611c73d6dfdc4d3e47f1ab12.scope: Deactivated successfully.
Jan 26 09:44:38 compute-0 podman[105042]: 2026-01-26 09:44:38.768650041 +0000 UTC m=+0.119579528 container attach 347ed50fcf700c9c4ed0d89160f3356ccc70a7f1611c73d6dfdc4d3e47f1ab12 (image=quay.io/ceph/grafana:10.4.0, name=loving_bhabha, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:38 compute-0 podman[105042]: 2026-01-26 09:44:38.768898338 +0000 UTC m=+0.119827785 container died 347ed50fcf700c9c4ed0d89160f3356ccc70a7f1611c73d6dfdc4d3e47f1ab12 (image=quay.io/ceph/grafana:10.4.0, name=loving_bhabha, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:38 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 26 09:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-225b061738a938cc74c8dcbf8cebed3b0cd0b7f460a0e1a79647d44b709323fe-merged.mount: Deactivated successfully.
Jan 26 09:44:38 compute-0 podman[105042]: 2026-01-26 09:44:38.806730798 +0000 UTC m=+0.157660275 container remove 347ed50fcf700c9c4ed0d89160f3356ccc70a7f1611c73d6dfdc4d3e47f1ab12 (image=quay.io/ceph/grafana:10.4.0, name=loving_bhabha, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:38 compute-0 ceph-mon[74456]: 12.1a scrub starts
Jan 26 09:44:38 compute-0 ceph-mon[74456]: 12.1a scrub ok
Jan 26 09:44:38 compute-0 ceph-mon[74456]: 10.14 scrub starts
Jan 26 09:44:38 compute-0 ceph-mon[74456]: 10.14 scrub ok
Jan 26 09:44:38 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:38 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:38 compute-0 ceph-mon[74456]: Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 26 09:44:38 compute-0 ceph-mon[74456]: Reconfiguring daemon grafana.compute-0 on compute-0
Jan 26 09:44:38 compute-0 ceph-mon[74456]: 9.1a scrub starts
Jan 26 09:44:38 compute-0 ceph-mon[74456]: 9.1a scrub ok
Jan 26 09:44:38 compute-0 ceph-mon[74456]: osdmap e104: 3 total, 3 up, 3 in
Jan 26 09:44:38 compute-0 systemd[1]: libpod-conmon-347ed50fcf700c9c4ed0d89160f3356ccc70a7f1611c73d6dfdc4d3e47f1ab12.scope: Deactivated successfully.
Jan 26 09:44:38 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:44:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=server t=2026-01-26T09:44:39.050534907Z level=info msg="Shutdown started" reason="System signal: terminated"
Jan 26 09:44:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=ticker t=2026-01-26T09:44:39.051043123Z level=info msg=stopped last_tick=2026-01-26T09:44:30Z
Jan 26 09:44:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=tracing t=2026-01-26T09:44:39.051127625Z level=info msg="Closing tracing"
Jan 26 09:44:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[100026]: logger=grafana-apiserver t=2026-01-26T09:44:39.051493715Z level=info msg="StorageObjectCountTracker pruner is exiting"
Jan 26 09:44:39 compute-0 podman[105111]: 2026-01-26 09:44:39.071444315 +0000 UTC m=+0.054771333 container died 19752b52da5205ecf87a29f7ba2f0a5446dcbf057bedea6661df25a0a9f3af6a (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e52e8977ccad07210ba1d1e5600fd9d902eaa5086306f5d7b89354b0c97196f2-merged.mount: Deactivated successfully.
Jan 26 09:44:39 compute-0 podman[105111]: 2026-01-26 09:44:39.111405028 +0000 UTC m=+0.094732056 container remove 19752b52da5205ecf87a29f7ba2f0a5446dcbf057bedea6661df25a0a9f3af6a (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:39 compute-0 bash[105111]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0
Jan 26 09:44:39 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@grafana.compute-0.service: Deactivated successfully.
Jan 26 09:44:39 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:44:39 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@grafana.compute-0.service: Consumed 4.393s CPU time.
Jan 26 09:44:39 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:44:39 compute-0 podman[105217]: 2026-01-26 09:44:39.514955501 +0000 UTC m=+0.064922959 container create ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78bf79500a5d34ccbd8cff00b95ad4d254233ad6bab18507bd653a286ed0c604/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78bf79500a5d34ccbd8cff00b95ad4d254233ad6bab18507bd653a286ed0c604/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78bf79500a5d34ccbd8cff00b95ad4d254233ad6bab18507bd653a286ed0c604/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78bf79500a5d34ccbd8cff00b95ad4d254233ad6bab18507bd653a286ed0c604/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78bf79500a5d34ccbd8cff00b95ad4d254233ad6bab18507bd653a286ed0c604/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:39 compute-0 podman[105217]: 2026-01-26 09:44:39.489428539 +0000 UTC m=+0.039396047 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 26 09:44:39 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Jan 26 09:44:39 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Jan 26 09:44:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:44:39.778Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000009023s
Jan 26 09:44:39 compute-0 ceph-mon[74456]: pgmap v25: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 3 objects/s recovering
Jan 26 09:44:39 compute-0 ceph-mon[74456]: 9.15 scrub starts
Jan 26 09:44:39 compute-0 ceph-mon[74456]: 9.15 scrub ok
Jan 26 09:44:39 compute-0 ceph-mon[74456]: 10.15 scrub starts
Jan 26 09:44:39 compute-0 ceph-mon[74456]: 10.15 scrub ok
Jan 26 09:44:39 compute-0 ceph-mon[74456]: 9.1e scrub starts
Jan 26 09:44:39 compute-0 ceph-mon[74456]: 9.1e scrub ok
Jan 26 09:44:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:40 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:40 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:40.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:40 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4003c30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:40.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v27: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 26 09:44:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 26 09:44:40 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 26 09:44:40 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 26 09:44:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 26 09:44:40 compute-0 podman[105217]: 2026-01-26 09:44:40.964013366 +0000 UTC m=+1.513980904 container init ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:40 : epoch 6977372f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:44:40 compute-0 ceph-mon[74456]: 9.9 scrub starts
Jan 26 09:44:40 compute-0 ceph-mon[74456]: 9.9 scrub ok
Jan 26 09:44:40 compute-0 ceph-mon[74456]: 12.10 scrub starts
Jan 26 09:44:40 compute-0 ceph-mon[74456]: 12.10 scrub ok
Jan 26 09:44:40 compute-0 ceph-mon[74456]: 9.1d scrub starts
Jan 26 09:44:40 compute-0 ceph-mon[74456]: 9.1d scrub ok
Jan 26 09:44:40 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 26 09:44:40 compute-0 ceph-mon[74456]: 9.14 scrub starts
Jan 26 09:44:40 compute-0 ceph-mon[74456]: 9.14 scrub ok
Jan 26 09:44:40 compute-0 podman[105217]: 2026-01-26 09:44:40.975928581 +0000 UTC m=+1.525896029 container start ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:40 compute-0 bash[105217]: ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e
Jan 26 09:44:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 26 09:44:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 26 09:44:40 compute-0 systemd[1]: Started Ceph grafana.compute-0 for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:44:40 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 26 09:44:40 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 105 pg[9.11( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=105 pruub=13.819399834s) [1] r=-1 lpr=105 pi=[63,105)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 272.463165283s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:40 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 105 pg[9.11( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=105 pruub=13.819363594s) [1] r=-1 lpr=105 pi=[63,105)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 272.463165283s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:41 compute-0 sudo[104967]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:44:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:44:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:41 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring rgw.rgw.compute-1.fbcidm (unknown last config time)...
Jan 26 09:44:41 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring rgw.rgw.compute-1.fbcidm (unknown last config time)...
Jan 26 09:44:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fbcidm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 26 09:44:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fbcidm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:44:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 26 09:44:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:44:41 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:44:41 compute-0 ceph-mgr[74755]: [cephadm INFO cephadm.serve] Reconfiguring daemon rgw.rgw.compute-1.fbcidm on compute-1
Jan 26 09:44:41 compute-0 ceph-mgr[74755]: log_channel(cephadm) log [INF] : Reconfiguring daemon rgw.rgw.compute-1.fbcidm on compute-1
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.196749312Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-01-26T09:44:41Z
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.19702401Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.19703156Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197035671Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197039211Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197042461Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197051721Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197055081Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197061571Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197065291Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197068462Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197071602Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197078252Z level=info msg=Target target=[all]
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197084262Z level=info msg="Path Home" path=/usr/share/grafana
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197088202Z level=info msg="Path Data" path=/var/lib/grafana
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197091462Z level=info msg="Path Logs" path=/var/log/grafana
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197094622Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197097872Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=settings t=2026-01-26T09:44:41.197101142Z level=info msg="App mode production"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=sqlstore t=2026-01-26T09:44:41.197521615Z level=info msg="Connecting to DB" dbtype=sqlite3
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=sqlstore t=2026-01-26T09:44:41.197544736Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=migrator t=2026-01-26T09:44:41.198358049Z level=info msg="Starting DB migrations"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=migrator t=2026-01-26T09:44:41.21452639Z level=info msg="migrations completed" performed=0 skipped=547 duration=471.745µs
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=sqlstore t=2026-01-26T09:44:41.215437436Z level=info msg="Created default organization"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=secrets t=2026-01-26T09:44:41.216209968Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=plugin.store t=2026-01-26T09:44:41.262748421Z level=info msg="Loading plugins..."
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=local.finder t=2026-01-26T09:44:41.35245399Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=plugin.store t=2026-01-26T09:44:41.352487551Z level=info msg="Plugins loaded" count=55 duration=89.74176ms
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=query_data t=2026-01-26T09:44:41.355246211Z level=info msg="Query Service initialization"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=live.push_http t=2026-01-26T09:44:41.359109883Z level=info msg="Live Push Gateway initialization"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=ngalert.migration t=2026-01-26T09:44:41.363594024Z level=info msg=Starting
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=ngalert.state.manager t=2026-01-26T09:44:41.372817142Z level=info msg="Running in alternative execution of Error/NoData mode"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=infra.usagestats.collector t=2026-01-26T09:44:41.374444599Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=provisioning.datasources t=2026-01-26T09:44:41.376321404Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=provisioning.alerting t=2026-01-26T09:44:41.397483529Z level=info msg="starting to provision alerting"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=provisioning.alerting t=2026-01-26T09:44:41.397505969Z level=info msg="finished to provision alerting"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=grafanaStorageLogger t=2026-01-26T09:44:41.397689226Z level=info msg="Storage starting"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=ngalert.state.manager t=2026-01-26T09:44:41.397692346Z level=info msg="Warming state cache for startup"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=ngalert.multiorg.alertmanager t=2026-01-26T09:44:41.397954163Z level=info msg="Starting MultiOrg Alertmanager"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=http.server t=2026-01-26T09:44:41.401817655Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=http.server t=2026-01-26T09:44:41.402502005Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=ngalert.state.manager t=2026-01-26T09:44:41.437297017Z level=info msg="State cache has been initialized" states=0 duration=39.602221ms
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=ngalert.scheduler t=2026-01-26T09:44:41.437350479Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=ticker t=2026-01-26T09:44:41.437510153Z level=info msg=starting first_tick=2026-01-26T09:44:50Z
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=provisioning.dashboard t=2026-01-26T09:44:41.451438438Z level=info msg="starting to provision dashboards"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=provisioning.dashboard t=2026-01-26T09:44:41.466218347Z level=info msg="finished to provision dashboards"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=plugins.update.checker t=2026-01-26T09:44:41.469348999Z level=info msg="Update check succeeded" duration=70.949203ms
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=grafana.update.checker t=2026-01-26T09:44:41.469779071Z level=info msg="Update check succeeded" duration=72.072055ms
Jan 26 09:44:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=grafana-apiserver t=2026-01-26T09:44:41.610982257Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=grafana-apiserver t=2026-01-26T09:44:41.611501192Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Jan 26 09:44:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:44:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Jan 26 09:44:41 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 26 09:44:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 26 09:44:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Jan 26 09:44:41 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 26 09:44:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 26 09:44:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Jan 26 09:44:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 26 09:44:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 26 09:44:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Jan 26 09:44:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:41 compute-0 ceph-mgr[74755]: [prometheus INFO root] Restarting engine...
Jan 26 09:44:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: [26/Jan/2026:09:44:41] ENGINE Bus STOPPING
Jan 26 09:44:41 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.error] [26/Jan/2026:09:44:41] ENGINE Bus STOPPING
Jan 26 09:44:41 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 26 09:44:41 compute-0 sudo[105260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:41 compute-0 sudo[105260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:41 compute-0 sudo[105260]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:41 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 26 09:44:41 compute-0 sudo[105285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 26 09:44:41 compute-0 sudo[105285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 26 09:44:42 compute-0 ceph-mon[74456]: pgmap v27: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:42 compute-0 ceph-mon[74456]: 9.19 scrub starts
Jan 26 09:44:42 compute-0 ceph-mon[74456]: 9.19 scrub ok
Jan 26 09:44:42 compute-0 ceph-mon[74456]: 9.f scrub starts
Jan 26 09:44:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 26 09:44:42 compute-0 ceph-mon[74456]: osdmap e105: 3 total, 3 up, 3 in
Jan 26 09:44:42 compute-0 ceph-mon[74456]: 9.f scrub ok
Jan 26 09:44:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:42 compute-0 ceph-mon[74456]: Reconfiguring rgw.rgw.compute-1.fbcidm (unknown last config time)...
Jan 26 09:44:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fbcidm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 09:44:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:44:42 compute-0 ceph-mon[74456]: Reconfiguring daemon rgw.rgw.compute-1.fbcidm on compute-1
Jan 26 09:44:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 26 09:44:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 26 09:44:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 26 09:44:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:42 compute-0 ceph-mon[74456]: 9.c scrub starts
Jan 26 09:44:42 compute-0 ceph-mon[74456]: 9.c scrub ok
Jan 26 09:44:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:42 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:42 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: [26/Jan/2026:09:44:42] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Jan 26 09:44:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: [26/Jan/2026:09:44:42] ENGINE Bus STOPPED
Jan 26 09:44:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: [26/Jan/2026:09:44:42] ENGINE Bus STARTING
Jan 26 09:44:42 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.error] [26/Jan/2026:09:44:42] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Jan 26 09:44:42 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.error] [26/Jan/2026:09:44:42] ENGINE Bus STOPPED
Jan 26 09:44:42 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.error] [26/Jan/2026:09:44:42] ENGINE Bus STARTING
Jan 26 09:44:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:42.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:42 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: [26/Jan/2026:09:44:42] ENGINE Serving on http://:::9283
Jan 26 09:44:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: [26/Jan/2026:09:44:42] ENGINE Bus STARTED
Jan 26 09:44:42 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.error] [26/Jan/2026:09:44:42] ENGINE Serving on http://:::9283
Jan 26 09:44:42 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.error] [26/Jan/2026:09:44:42] ENGINE Bus STARTED
Jan 26 09:44:42 compute-0 ceph-mgr[74755]: [prometheus INFO root] Engine started.
Jan 26 09:44:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 26 09:44:42 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 26 09:44:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 106 pg[9.11( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=106) [1]/[0] r=0 lpr=106 pi=[63,106)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:42 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 106 pg[9.11( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=106) [1]/[0] r=0 lpr=106 pi=[63,106)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:44:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:42.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:42 compute-0 podman[105393]: 2026-01-26 09:44:42.517268128 +0000 UTC m=+0.077658978 container exec 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:44:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 26 09:44:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 26 09:44:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:44:42 compute-0 podman[105393]: 2026-01-26 09:44:42.631956063 +0000 UTC m=+0.192346863 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 09:44:42 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 26 09:44:42 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 26 09:44:43 compute-0 ceph-mon[74456]: 9.1b scrub starts
Jan 26 09:44:43 compute-0 ceph-mon[74456]: 9.1b scrub ok
Jan 26 09:44:43 compute-0 ceph-mon[74456]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 26 09:44:43 compute-0 ceph-mon[74456]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 26 09:44:43 compute-0 ceph-mon[74456]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 26 09:44:43 compute-0 ceph-mon[74456]: osdmap e106: 3 total, 3 up, 3 in
Jan 26 09:44:43 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 26 09:44:43 compute-0 ceph-mon[74456]: 9.1 scrub starts
Jan 26 09:44:43 compute-0 ceph-mon[74456]: 9.1 scrub ok
Jan 26 09:44:43 compute-0 podman[105511]: 2026-01-26 09:44:43.207950791 +0000 UTC m=+0.083160939 container exec 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:43 compute-0 podman[105511]: 2026-01-26 09:44:43.240603221 +0000 UTC m=+0.115813319 container exec_died 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 26 09:44:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 26 09:44:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 26 09:44:43 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 26 09:44:43 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 107 pg[9.12( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=4 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=107 pruub=11.467128754s) [1] r=-1 lpr=107 pi=[63,107)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 272.463073730s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:43 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 107 pg[9.12( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=4 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=107 pruub=11.467007637s) [1] r=-1 lpr=107 pi=[63,107)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 272.463073730s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:43 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 107 pg[9.11( v 60'1159 (0'0,60'1159] local-lis/les=106/107 n=5 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=106) [1]/[0] async=[1] r=0 lpr=106 pi=[63,106)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:43 compute-0 podman[105602]: 2026-01-26 09:44:43.568276618 +0000 UTC m=+0.060205751 container exec d3395b53724857015134a8bdb584007eb1b94a5b002c559505dba80a9d92ea83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 09:44:43 compute-0 podman[105602]: 2026-01-26 09:44:43.585082587 +0000 UTC m=+0.077011700 container exec_died d3395b53724857015134a8bdb584007eb1b94a5b002c559505dba80a9d92ea83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 26 09:44:43 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 26 09:44:43 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 26 09:44:43 compute-0 podman[105667]: 2026-01-26 09:44:43.816863176 +0000 UTC m=+0.059771019 container exec 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 09:44:43 compute-0 podman[105667]: 2026-01-26 09:44:43.828523506 +0000 UTC m=+0.071431349 container exec_died 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 09:44:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:43 : epoch 6977372f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:44:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:43 : epoch 6977372f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:44:44 compute-0 ceph-mon[74456]: pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:44 compute-0 ceph-mon[74456]: 9.18 scrub starts
Jan 26 09:44:44 compute-0 ceph-mon[74456]: 9.18 scrub ok
Jan 26 09:44:44 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 26 09:44:44 compute-0 ceph-mon[74456]: osdmap e107: 3 total, 3 up, 3 in
Jan 26 09:44:44 compute-0 ceph-mon[74456]: 9.0 scrub starts
Jan 26 09:44:44 compute-0 ceph-mon[74456]: 9.0 scrub ok
Jan 26 09:44:44 compute-0 podman[105734]: 2026-01-26 09:44:44.046334988 +0000 UTC m=+0.058222363 container exec 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, version=2.2.4, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container)
Jan 26 09:44:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:44 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4003c30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:44 compute-0 podman[105734]: 2026-01-26 09:44:44.069849952 +0000 UTC m=+0.081737367 container exec_died 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.buildah.version=1.28.2, name=keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, architecture=x86_64)
Jan 26 09:44:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:44 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:44.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:44 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:44 compute-0 podman[105802]: 2026-01-26 09:44:44.297314436 +0000 UTC m=+0.057554745 container exec c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:44 compute-0 podman[105802]: 2026-01-26 09:44:44.329857383 +0000 UTC m=+0.090097572 container exec_died c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 26 09:44:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 26 09:44:44 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 26 09:44:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 108 pg[9.11( v 60'1159 (0'0,60'1159] local-lis/les=106/107 n=5 ec=63/48 lis/c=106/63 les/c/f=107/65/0 sis=108 pruub=14.996617317s) [1] async=[1] r=-1 lpr=108 pi=[63,108)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 277.000061035s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 108 pg[9.11( v 60'1159 (0'0,60'1159] local-lis/les=106/107 n=5 ec=63/48 lis/c=106/63 les/c/f=107/65/0 sis=108 pruub=14.996538162s) [1] r=-1 lpr=108 pi=[63,108)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 277.000061035s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 108 pg[9.12( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=4 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=108) [1]/[0] r=0 lpr=108 pi=[63,108)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:44 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 108 pg[9.12( v 60'1159 (0'0,60'1159] local-lis/les=63/65 n=4 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=108) [1]/[0] r=0 lpr=108 pi=[63,108)/1 crt=60'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 09:44:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:44.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:44 compute-0 sshd-session[105836]: Accepted publickey for zuul from 192.168.122.30 port 47224 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:44:44 compute-0 systemd-logind[787]: New session 39 of user zuul.
Jan 26 09:44:44 compute-0 systemd[1]: Started Session 39 of User zuul.
Jan 26 09:44:44 compute-0 sshd-session[105836]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:44:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v33: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 26 09:44:44 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 26 09:44:44 compute-0 podman[105882]: 2026-01-26 09:44:44.586505705 +0000 UTC m=+0.064024133 container exec ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:44 compute-0 podman[105882]: 2026-01-26 09:44:44.750518374 +0000 UTC m=+0.228036802 container exec_died ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:44:44 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 26 09:44:44 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 26 09:44:45 compute-0 podman[106142]: 2026-01-26 09:44:45.203133774 +0000 UTC m=+0.056189595 container exec 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:45 compute-0 python3.9[106129]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 26 09:44:45 compute-0 podman[106142]: 2026-01-26 09:44:45.246627179 +0000 UTC m=+0.099682980 container exec_died 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:44:45 compute-0 sudo[105285]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:44:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 26 09:44:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:44:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 26 09:44:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 26 09:44:45 compute-0 ceph-mon[74456]: 9.7 scrub starts
Jan 26 09:44:45 compute-0 ceph-mon[74456]: 9.7 scrub ok
Jan 26 09:44:45 compute-0 ceph-mon[74456]: osdmap e108: 3 total, 3 up, 3 in
Jan 26 09:44:45 compute-0 ceph-mon[74456]: pgmap v33: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:45 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 26 09:44:45 compute-0 ceph-mon[74456]: 9.b scrub starts
Jan 26 09:44:45 compute-0 ceph-mon[74456]: 9.b scrub ok
Jan 26 09:44:45 compute-0 ceph-mon[74456]: 9.2 scrub starts
Jan 26 09:44:45 compute-0 ceph-mon[74456]: 9.2 scrub ok
Jan 26 09:44:45 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 26 09:44:45 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 109 pg[9.12( v 60'1159 (0'0,60'1159] local-lis/les=108/109 n=4 ec=63/48 lis/c=63/63 les/c/f=65/65/0 sis=108) [1]/[0] async=[1] r=0 lpr=108 pi=[63,108)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:44:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:44:45 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:44:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:44:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:44:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:44:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:44:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:44:45 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:44:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:44:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:44:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:44:45 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:44:45 compute-0 sudo[106258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:45 compute-0 sudo[106258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:45 compute-0 sudo[106258]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:45 compute-0 sudo[106289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:44:45 compute-0 sudo[106289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:46 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:46 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4003c30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:46.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:46 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 09:44:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:46.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 09:44:46 compute-0 podman[106449]: 2026-01-26 09:44:46.407921486 +0000 UTC m=+0.057934747 container create 17a16f03eb5d1841a555281849b2aef67c937451fc49c82437dc89d37b0e7d3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 09:44:46 compute-0 systemd[1]: Started libpod-conmon-17a16f03eb5d1841a555281849b2aef67c937451fc49c82437dc89d37b0e7d3e.scope.
Jan 26 09:44:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:46 compute-0 python3.9[106407]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:44:46 compute-0 podman[106449]: 2026-01-26 09:44:46.387684687 +0000 UTC m=+0.037697988 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:44:46 compute-0 podman[106449]: 2026-01-26 09:44:46.500920309 +0000 UTC m=+0.150933620 container init 17a16f03eb5d1841a555281849b2aef67c937451fc49c82437dc89d37b0e7d3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 09:44:46 compute-0 podman[106449]: 2026-01-26 09:44:46.510443466 +0000 UTC m=+0.160456727 container start 17a16f03eb5d1841a555281849b2aef67c937451fc49c82437dc89d37b0e7d3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_banach, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:44:46 compute-0 podman[106449]: 2026-01-26 09:44:46.513615578 +0000 UTC m=+0.163628829 container attach 17a16f03eb5d1841a555281849b2aef67c937451fc49c82437dc89d37b0e7d3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 26 09:44:46 compute-0 lucid_banach[106465]: 167 167
Jan 26 09:44:46 compute-0 systemd[1]: libpod-17a16f03eb5d1841a555281849b2aef67c937451fc49c82437dc89d37b0e7d3e.scope: Deactivated successfully.
Jan 26 09:44:46 compute-0 podman[106449]: 2026-01-26 09:44:46.521026064 +0000 UTC m=+0.171039315 container died 17a16f03eb5d1841a555281849b2aef67c937451fc49c82437dc89d37b0e7d3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_banach, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:44:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v35: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff4e747985d86b799614988a472ae9a4357e62d7e1bcf8f30b321bb6bbbc16aa-merged.mount: Deactivated successfully.
Jan 26 09:44:46 compute-0 podman[106449]: 2026-01-26 09:44:46.566161997 +0000 UTC m=+0.216175268 container remove 17a16f03eb5d1841a555281849b2aef67c937451fc49c82437dc89d37b0e7d3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_banach, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 26 09:44:46 compute-0 systemd[1]: libpod-conmon-17a16f03eb5d1841a555281849b2aef67c937451fc49c82437dc89d37b0e7d3e.scope: Deactivated successfully.
Jan 26 09:44:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:44:46] "GET /metrics HTTP/1.1" 200 48285 "" "Prometheus/2.51.0"
Jan 26 09:44:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:44:46] "GET /metrics HTTP/1.1" 200 48285 "" "Prometheus/2.51.0"
Jan 26 09:44:46 compute-0 podman[106492]: 2026-01-26 09:44:46.722262695 +0000 UTC m=+0.048220153 container create a8e7f6b7be67cc3e78bb877998c282008bcbfa579bc2833e333ffa418ed96229 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shannon, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:44:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:46 compute-0 ceph-mon[74456]: 9.3 scrub starts
Jan 26 09:44:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 26 09:44:46 compute-0 ceph-mon[74456]: 9.3 scrub ok
Jan 26 09:44:46 compute-0 ceph-mon[74456]: osdmap e109: 3 total, 3 up, 3 in
Jan 26 09:44:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:44:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:44:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:44:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:44:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:44:46 compute-0 systemd[1]: Started libpod-conmon-a8e7f6b7be67cc3e78bb877998c282008bcbfa579bc2833e333ffa418ed96229.scope.
Jan 26 09:44:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 26 09:44:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 26 09:44:46 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 26 09:44:46 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 110 pg[9.12( v 60'1159 (0'0,60'1159] local-lis/les=108/109 n=4 ec=63/48 lis/c=108/63 les/c/f=109/65/0 sis=110 pruub=14.952144623s) [1] async=[1] r=-1 lpr=110 pi=[63,110)/1 crt=60'1159 lcod 0'0 mlcod 0'0 active pruub 279.390747070s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:44:46 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 110 pg[9.12( v 60'1159 (0'0,60'1159] local-lis/les=108/109 n=4 ec=63/48 lis/c=108/63 les/c/f=109/65/0 sis=110 pruub=14.952089310s) [1] r=-1 lpr=110 pi=[63,110)/1 crt=60'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 279.390747070s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 09:44:46 compute-0 podman[106492]: 2026-01-26 09:44:46.70178832 +0000 UTC m=+0.027745808 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:44:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/198f0db51d46d024e6e4508582e412ef7a6c57a8b08f5b70c23bc719a4142a88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/198f0db51d46d024e6e4508582e412ef7a6c57a8b08f5b70c23bc719a4142a88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/198f0db51d46d024e6e4508582e412ef7a6c57a8b08f5b70c23bc719a4142a88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/198f0db51d46d024e6e4508582e412ef7a6c57a8b08f5b70c23bc719a4142a88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/198f0db51d46d024e6e4508582e412ef7a6c57a8b08f5b70c23bc719a4142a88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:46 compute-0 podman[106492]: 2026-01-26 09:44:46.822879891 +0000 UTC m=+0.148837359 container init a8e7f6b7be67cc3e78bb877998c282008bcbfa579bc2833e333ffa418ed96229 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shannon, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 09:44:46 compute-0 podman[106492]: 2026-01-26 09:44:46.835459967 +0000 UTC m=+0.161417425 container start a8e7f6b7be67cc3e78bb877998c282008bcbfa579bc2833e333ffa418ed96229 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:44:46 compute-0 podman[106492]: 2026-01-26 09:44:46.838724382 +0000 UTC m=+0.164681840 container attach a8e7f6b7be67cc3e78bb877998c282008bcbfa579bc2833e333ffa418ed96229 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shannon, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:44:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:46 : epoch 6977372f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:44:47 compute-0 romantic_shannon[106531]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:44:47 compute-0 romantic_shannon[106531]: --> All data devices are unavailable
Jan 26 09:44:47 compute-0 systemd[1]: libpod-a8e7f6b7be67cc3e78bb877998c282008bcbfa579bc2833e333ffa418ed96229.scope: Deactivated successfully.
Jan 26 09:44:47 compute-0 podman[106492]: 2026-01-26 09:44:47.208622817 +0000 UTC m=+0.534580305 container died a8e7f6b7be67cc3e78bb877998c282008bcbfa579bc2833e333ffa418ed96229 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shannon, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 26 09:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-198f0db51d46d024e6e4508582e412ef7a6c57a8b08f5b70c23bc719a4142a88-merged.mount: Deactivated successfully.
Jan 26 09:44:47 compute-0 podman[106492]: 2026-01-26 09:44:47.268327253 +0000 UTC m=+0.594284751 container remove a8e7f6b7be67cc3e78bb877998c282008bcbfa579bc2833e333ffa418ed96229 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shannon, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:44:47 compute-0 systemd[1]: libpod-conmon-a8e7f6b7be67cc3e78bb877998c282008bcbfa579bc2833e333ffa418ed96229.scope: Deactivated successfully.
Jan 26 09:44:47 compute-0 sudo[106289]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:47 compute-0 sudo[106645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:47 compute-0 sudo[106645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:47 compute-0 sudo[106645]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:47 compute-0 sudo[106721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olulzqkriksfeqviezlwooyktvlwgbdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420686.9991734-88-225862920623697/AnsiballZ_command.py'
Jan 26 09:44:47 compute-0 sudo[106721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:44:47 compute-0 sudo[106693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:44:47 compute-0 sudo[106693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:44:47 compute-0 python3.9[106733]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:44:47 compute-0 sudo[106721]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:44:47.780Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.00185699s
Jan 26 09:44:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 26 09:44:47 compute-0 ceph-mon[74456]: pgmap v35: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:47 compute-0 ceph-mon[74456]: 9.17 scrub starts
Jan 26 09:44:47 compute-0 ceph-mon[74456]: 9.17 scrub ok
Jan 26 09:44:47 compute-0 ceph-mon[74456]: osdmap e110: 3 total, 3 up, 3 in
Jan 26 09:44:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 26 09:44:47 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 26 09:44:47 compute-0 podman[106805]: 2026-01-26 09:44:47.917869129 +0000 UTC m=+0.035909964 container create ce7e4d2a5e6224f782f00414e34e6e619b2ca3c91ba7338ee72c56f02296d552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 26 09:44:47 compute-0 systemd[1]: Started libpod-conmon-ce7e4d2a5e6224f782f00414e34e6e619b2ca3c91ba7338ee72c56f02296d552.scope.
Jan 26 09:44:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:47 compute-0 podman[106805]: 2026-01-26 09:44:47.902500402 +0000 UTC m=+0.020541267 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:44:48 compute-0 podman[106805]: 2026-01-26 09:44:48.004981633 +0000 UTC m=+0.123022488 container init ce7e4d2a5e6224f782f00414e34e6e619b2ca3c91ba7338ee72c56f02296d552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_noyce, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:44:48 compute-0 podman[106805]: 2026-01-26 09:44:48.011842582 +0000 UTC m=+0.129883417 container start ce7e4d2a5e6224f782f00414e34e6e619b2ca3c91ba7338ee72c56f02296d552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_noyce, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 09:44:48 compute-0 podman[106805]: 2026-01-26 09:44:48.014948253 +0000 UTC m=+0.132989088 container attach ce7e4d2a5e6224f782f00414e34e6e619b2ca3c91ba7338ee72c56f02296d552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_noyce, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 26 09:44:48 compute-0 nifty_noyce[106821]: 167 167
Jan 26 09:44:48 compute-0 systemd[1]: libpod-ce7e4d2a5e6224f782f00414e34e6e619b2ca3c91ba7338ee72c56f02296d552.scope: Deactivated successfully.
Jan 26 09:44:48 compute-0 podman[106805]: 2026-01-26 09:44:48.018489505 +0000 UTC m=+0.136530360 container died ce7e4d2a5e6224f782f00414e34e6e619b2ca3c91ba7338ee72c56f02296d552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_noyce, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 26 09:44:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c91d23cbdd513ef79b0fe46417f8e8efca40b969ff9f3b37b6e4b7bb54b6f26c-merged.mount: Deactivated successfully.
Jan 26 09:44:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:48 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:48 compute-0 podman[106805]: 2026-01-26 09:44:48.066697878 +0000 UTC m=+0.184738713 container remove ce7e4d2a5e6224f782f00414e34e6e619b2ca3c91ba7338ee72c56f02296d552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:44:48 compute-0 systemd[1]: libpod-conmon-ce7e4d2a5e6224f782f00414e34e6e619b2ca3c91ba7338ee72c56f02296d552.scope: Deactivated successfully.
Jan 26 09:44:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:48 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:48.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:48 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:48 compute-0 podman[106898]: 2026-01-26 09:44:48.261548803 +0000 UTC m=+0.052762645 container create 897eb376e6440218caf4493b89c90459d07f96f385bf2a6f4cb008ffe249edd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_elion, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:44:48 compute-0 systemd[1]: Started libpod-conmon-897eb376e6440218caf4493b89c90459d07f96f385bf2a6f4cb008ffe249edd7.scope.
Jan 26 09:44:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec840a8dedf67ba6944bcea1832e7e2c7274aba4a54bf0c312ff3cc5a4d4432f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec840a8dedf67ba6944bcea1832e7e2c7274aba4a54bf0c312ff3cc5a4d4432f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec840a8dedf67ba6944bcea1832e7e2c7274aba4a54bf0c312ff3cc5a4d4432f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec840a8dedf67ba6944bcea1832e7e2c7274aba4a54bf0c312ff3cc5a4d4432f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:48 compute-0 podman[106898]: 2026-01-26 09:44:48.244769225 +0000 UTC m=+0.035983087 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:44:48 compute-0 podman[106898]: 2026-01-26 09:44:48.342452376 +0000 UTC m=+0.133666228 container init 897eb376e6440218caf4493b89c90459d07f96f385bf2a6f4cb008ffe249edd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_elion, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:44:48 compute-0 podman[106898]: 2026-01-26 09:44:48.35157568 +0000 UTC m=+0.142789552 container start 897eb376e6440218caf4493b89c90459d07f96f385bf2a6f4cb008ffe249edd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 26 09:44:48 compute-0 podman[106898]: 2026-01-26 09:44:48.355302929 +0000 UTC m=+0.146516771 container attach 897eb376e6440218caf4493b89c90459d07f96f385bf2a6f4cb008ffe249edd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_elion, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 26 09:44:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:48.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:48 compute-0 sudo[106994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyuzrkrsyeraeyukvdbmlttbogzzvedi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420688.0793312-124-204837970856452/AnsiballZ_stat.py'
Jan 26 09:44:48 compute-0 sudo[106994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:44:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v38: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:44:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:44:48 compute-0 laughing_elion[106916]: {
Jan 26 09:44:48 compute-0 laughing_elion[106916]:     "0": [
Jan 26 09:44:48 compute-0 laughing_elion[106916]:         {
Jan 26 09:44:48 compute-0 laughing_elion[106916]:             "devices": [
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "/dev/loop3"
Jan 26 09:44:48 compute-0 laughing_elion[106916]:             ],
Jan 26 09:44:48 compute-0 laughing_elion[106916]:             "lv_name": "ceph_lv0",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:             "lv_size": "21470642176",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:             "name": "ceph_lv0",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:             "tags": {
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "ceph.cluster_name": "ceph",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "ceph.crush_device_class": "",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "ceph.encrypted": "0",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "ceph.osd_id": "0",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "ceph.type": "block",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "ceph.vdo": "0",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:                 "ceph.with_tpm": "0"
Jan 26 09:44:48 compute-0 laughing_elion[106916]:             },
Jan 26 09:44:48 compute-0 laughing_elion[106916]:             "type": "block",
Jan 26 09:44:48 compute-0 laughing_elion[106916]:             "vg_name": "ceph_vg0"
Jan 26 09:44:48 compute-0 laughing_elion[106916]:         }
Jan 26 09:44:48 compute-0 laughing_elion[106916]:     ]
Jan 26 09:44:48 compute-0 laughing_elion[106916]: }
Jan 26 09:44:48 compute-0 systemd[1]: libpod-897eb376e6440218caf4493b89c90459d07f96f385bf2a6f4cb008ffe249edd7.scope: Deactivated successfully.
Jan 26 09:44:48 compute-0 podman[106898]: 2026-01-26 09:44:48.676061076 +0000 UTC m=+0.467274918 container died 897eb376e6440218caf4493b89c90459d07f96f385bf2a6f4cb008ffe249edd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_elion, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 09:44:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:44:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:44:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec840a8dedf67ba6944bcea1832e7e2c7274aba4a54bf0c312ff3cc5a4d4432f-merged.mount: Deactivated successfully.
Jan 26 09:44:48 compute-0 podman[106898]: 2026-01-26 09:44:48.737662097 +0000 UTC m=+0.528875939 container remove 897eb376e6440218caf4493b89c90459d07f96f385bf2a6f4cb008ffe249edd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 26 09:44:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:44:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:44:48 compute-0 systemd[1]: libpod-conmon-897eb376e6440218caf4493b89c90459d07f96f385bf2a6f4cb008ffe249edd7.scope: Deactivated successfully.
Jan 26 09:44:48 compute-0 python3.9[106998]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:44:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:44:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:44:48 compute-0 sudo[106994]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:48 compute-0 sudo[106693]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:48 compute-0 ceph-mon[74456]: osdmap e111: 3 total, 3 up, 3 in
Jan 26 09:44:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:44:48 compute-0 sudo[107016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:44:48 compute-0 sudo[107016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:48 compute-0 sudo[107016]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:48 compute-0 sudo[107065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:44:48 compute-0 sudo[107065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:49 compute-0 podman[107207]: 2026-01-26 09:44:49.445500768 +0000 UTC m=+0.064509437 container create 688a4e82301a1dc0c501c2d34f46a1069fd9be09132cd6e66b95c15ff0086a95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:44:49 compute-0 systemd[1]: Started libpod-conmon-688a4e82301a1dc0c501c2d34f46a1069fd9be09132cd6e66b95c15ff0086a95.scope.
Jan 26 09:44:49 compute-0 podman[107207]: 2026-01-26 09:44:49.414450285 +0000 UTC m=+0.033459024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:44:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:49 compute-0 podman[107207]: 2026-01-26 09:44:49.550530792 +0000 UTC m=+0.169539471 container init 688a4e82301a1dc0c501c2d34f46a1069fd9be09132cd6e66b95c15ff0086a95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wright, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:44:49 compute-0 podman[107207]: 2026-01-26 09:44:49.56078181 +0000 UTC m=+0.179790519 container start 688a4e82301a1dc0c501c2d34f46a1069fd9be09132cd6e66b95c15ff0086a95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:44:49 compute-0 naughty_wright[107248]: 167 167
Jan 26 09:44:49 compute-0 podman[107207]: 2026-01-26 09:44:49.567252038 +0000 UTC m=+0.186260697 container attach 688a4e82301a1dc0c501c2d34f46a1069fd9be09132cd6e66b95c15ff0086a95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wright, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 26 09:44:49 compute-0 systemd[1]: libpod-688a4e82301a1dc0c501c2d34f46a1069fd9be09132cd6e66b95c15ff0086a95.scope: Deactivated successfully.
Jan 26 09:44:49 compute-0 podman[107207]: 2026-01-26 09:44:49.573761088 +0000 UTC m=+0.192769767 container died 688a4e82301a1dc0c501c2d34f46a1069fd9be09132cd6e66b95c15ff0086a95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 26 09:44:49 compute-0 sudo[107279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoeejcgknlavnhdkboqdsshxauouiqqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420689.0838373-157-185997384862838/AnsiballZ_file.py'
Jan 26 09:44:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-34e377660c295938a7d378af6602a545571f1ee6696aa2c81dd2441f1d4c69f5-merged.mount: Deactivated successfully.
Jan 26 09:44:49 compute-0 sudo[107279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:44:49 compute-0 podman[107207]: 2026-01-26 09:44:49.611032622 +0000 UTC m=+0.230041281 container remove 688a4e82301a1dc0c501c2d34f46a1069fd9be09132cd6e66b95c15ff0086a95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:44:49 compute-0 systemd[1]: libpod-conmon-688a4e82301a1dc0c501c2d34f46a1069fd9be09132cd6e66b95c15ff0086a95.scope: Deactivated successfully.
Jan 26 09:44:49 compute-0 podman[107303]: 2026-01-26 09:44:49.810747278 +0000 UTC m=+0.056039930 container create 07942692e4edc56479db67ac680a47bf1cd629a8e3a0f5c1a8182b94be73dadc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 26 09:44:49 compute-0 ceph-mon[74456]: pgmap v38: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:49 compute-0 systemd[1]: Started libpod-conmon-07942692e4edc56479db67ac680a47bf1cd629a8e3a0f5c1a8182b94be73dadc.scope.
Jan 26 09:44:49 compute-0 python3.9[107288]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:44:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:44:49 compute-0 podman[107303]: 2026-01-26 09:44:49.789046367 +0000 UTC m=+0.034339049 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:44:49 compute-0 sudo[107279]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44ac153897f2c44e49c2c1bafca7e8e3b562750f4ede40dada982078982f8d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44ac153897f2c44e49c2c1bafca7e8e3b562750f4ede40dada982078982f8d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44ac153897f2c44e49c2c1bafca7e8e3b562750f4ede40dada982078982f8d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44ac153897f2c44e49c2c1bafca7e8e3b562750f4ede40dada982078982f8d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:44:49 compute-0 podman[107303]: 2026-01-26 09:44:49.898132789 +0000 UTC m=+0.143425461 container init 07942692e4edc56479db67ac680a47bf1cd629a8e3a0f5c1a8182b94be73dadc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 09:44:49 compute-0 podman[107303]: 2026-01-26 09:44:49.909597263 +0000 UTC m=+0.154889935 container start 07942692e4edc56479db67ac680a47bf1cd629a8e3a0f5c1a8182b94be73dadc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_antonelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 26 09:44:49 compute-0 podman[107303]: 2026-01-26 09:44:49.912809246 +0000 UTC m=+0.158101888 container attach 07942692e4edc56479db67ac680a47bf1cd629a8e3a0f5c1a8182b94be73dadc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_antonelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 09:44:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:50 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:50 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:50.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:50 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:50 compute-0 sudo[107511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clvayvqmshywrxfpjbsmnjszjdclzkfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420690.0872595-184-163592509266104/AnsiballZ_file.py'
Jan 26 09:44:50 compute-0 sudo[107511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:44:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:50.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:50 compute-0 python3.9[107519]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:44:50 compute-0 sudo[107511]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v39: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 26 09:44:50 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 26 09:44:50 compute-0 lvm[107572]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:44:50 compute-0 lvm[107572]: VG ceph_vg0 finished
Jan 26 09:44:50 compute-0 happy_antonelli[107319]: {}
Jan 26 09:44:50 compute-0 systemd[1]: libpod-07942692e4edc56479db67ac680a47bf1cd629a8e3a0f5c1a8182b94be73dadc.scope: Deactivated successfully.
Jan 26 09:44:50 compute-0 podman[107303]: 2026-01-26 09:44:50.693772174 +0000 UTC m=+0.939064866 container died 07942692e4edc56479db67ac680a47bf1cd629a8e3a0f5c1a8182b94be73dadc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_antonelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:44:50 compute-0 systemd[1]: libpod-07942692e4edc56479db67ac680a47bf1cd629a8e3a0f5c1a8182b94be73dadc.scope: Consumed 1.134s CPU time.
Jan 26 09:44:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c44ac153897f2c44e49c2c1bafca7e8e3b562750f4ede40dada982078982f8d9-merged.mount: Deactivated successfully.
Jan 26 09:44:50 compute-0 podman[107303]: 2026-01-26 09:44:50.752559173 +0000 UTC m=+0.997851835 container remove 07942692e4edc56479db67ac680a47bf1cd629a8e3a0f5c1a8182b94be73dadc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_antonelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:44:50 compute-0 systemd[1]: libpod-conmon-07942692e4edc56479db67ac680a47bf1cd629a8e3a0f5c1a8182b94be73dadc.scope: Deactivated successfully.
Jan 26 09:44:50 compute-0 sudo[107065]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:44:50 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 26 09:44:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:44:50 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 26 09:44:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 26 09:44:50 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 26 09:44:50 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 26 09:44:50 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:50 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:50 compute-0 sudo[107641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:44:50 compute-0 sudo[107641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:50 compute-0 sudo[107641]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:51 compute-0 python3.9[107739]: ansible-ansible.builtin.service_facts Invoked
Jan 26 09:44:51 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 26 09:44:51 compute-0 ceph-osd[82841]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 26 09:44:51 compute-0 network[107756]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 09:44:51 compute-0 network[107757]: 'network-scripts' will be removed from distribution in near future.
Jan 26 09:44:51 compute-0 network[107758]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 09:44:51 compute-0 ceph-mon[74456]: pgmap v39: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:51 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 26 09:44:51 compute-0 ceph-mon[74456]: osdmap e112: 3 total, 3 up, 3 in
Jan 26 09:44:51 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:44:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:52 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:52 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:52.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:52 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:52.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v41: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 26 09:44:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 26 09:44:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:44:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 26 09:44:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 26 09:44:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 26 09:44:52 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 26 09:44:52 compute-0 ceph-mon[74456]: 9.1c scrub starts
Jan 26 09:44:52 compute-0 ceph-mon[74456]: 9.1c scrub ok
Jan 26 09:44:52 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 26 09:44:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094453 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:44:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 26 09:44:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 26 09:44:53 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 26 09:44:53 compute-0 ceph-mon[74456]: pgmap v41: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:53 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 26 09:44:53 compute-0 ceph-mon[74456]: osdmap e113: 3 total, 3 up, 3 in
Jan 26 09:44:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:54 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:54 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:54.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:54 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:54.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 26 09:44:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 26 09:44:54 compute-0 sshd-session[107851]: Invalid user admin from 157.245.76.178 port 55194
Jan 26 09:44:54 compute-0 sudo[107859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:44:54 compute-0 sudo[107859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:44:54 compute-0 sudo[107859]: pam_unix(sudo:session): session closed for user root
Jan 26 09:44:54 compute-0 sshd-session[107851]: Connection closed by invalid user admin 157.245.76.178 port 55194 [preauth]
Jan 26 09:44:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 26 09:44:54 compute-0 ceph-mon[74456]: osdmap e114: 3 total, 3 up, 3 in
Jan 26 09:44:54 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 26 09:44:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 26 09:44:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 26 09:44:54 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 26 09:44:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 26 09:44:55 compute-0 ceph-mon[74456]: pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:55 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 26 09:44:55 compute-0 ceph-mon[74456]: osdmap e115: 3 total, 3 up, 3 in
Jan 26 09:44:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 26 09:44:55 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 26 09:44:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:56 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:56 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:56.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:56 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:44:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:56.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:44:56 compute-0 python3.9[108051]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:44:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v47: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:44:56] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:44:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:44:56] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:44:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 26 09:44:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 26 09:44:56 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 26 09:44:56 compute-0 ceph-mon[74456]: osdmap e116: 3 total, 3 up, 3 in
Jan 26 09:44:57 compute-0 python3.9[108201]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:44:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:44:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 26 09:44:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 26 09:44:57 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 26 09:44:58 compute-0 ceph-mon[74456]: pgmap v47: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:58 compute-0 ceph-mon[74456]: osdmap e117: 3 total, 3 up, 3 in
Jan 26 09:44:58 compute-0 ceph-mon[74456]: osdmap e118: 3 total, 3 up, 3 in
Jan 26 09:44:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:58 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:58 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:44:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:44:58.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:44:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:44:58 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:44:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:44:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 09:44:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:44:58.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 09:44:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:58 compute-0 python3.9[108357]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:44:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 26 09:44:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 26 09:44:58 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 26 09:44:59 compute-0 sudo[108513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egqfxuhbzrjckzhtxvzguiixnyprbjra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420699.1697195-328-208886599758302/AnsiballZ_setup.py'
Jan 26 09:44:59 compute-0 sudo[108513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:44:59 compute-0 ceph-mon[74456]: pgmap v50: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:44:59 compute-0 ceph-mon[74456]: osdmap e119: 3 total, 3 up, 3 in
Jan 26 09:44:59 compute-0 python3.9[108515]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:45:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:00 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:00 compute-0 sudo[108513]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:00 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:00.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:00 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:00 compute-0 sudo[108599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnonuqsjiostoflfcjjlhkxpcbeafadc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420699.1697195-328-208886599758302/AnsiballZ_dnf.py'
Jan 26 09:45:00 compute-0 sudo[108599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:00.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 447 B/s rd, 0 op/s; 48 B/s, 1 objects/s recovering
Jan 26 09:45:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 26 09:45:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 26 09:45:00 compute-0 python3.9[108601]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:45:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 26 09:45:00 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 26 09:45:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 26 09:45:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 26 09:45:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 26 09:45:01 compute-0 ceph-mon[74456]: pgmap v52: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 447 B/s rd, 0 op/s; 48 B/s, 1 objects/s recovering
Jan 26 09:45:01 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 26 09:45:01 compute-0 ceph-mon[74456]: osdmap e120: 3 total, 3 up, 3 in
Jan 26 09:45:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:02 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:02 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:02.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:02 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:02.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v54: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 367 B/s rd, 0 op/s; 39 B/s, 1 objects/s recovering
Jan 26 09:45:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 26 09:45:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 26 09:45:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:45:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 26 09:45:02 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 26 09:45:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 26 09:45:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 26 09:45:02 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 26 09:45:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:45:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:45:03 compute-0 ceph-mon[74456]: pgmap v54: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 367 B/s rd, 0 op/s; 39 B/s, 1 objects/s recovering
Jan 26 09:45:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 26 09:45:03 compute-0 ceph-mon[74456]: osdmap e121: 3 total, 3 up, 3 in
Jan 26 09:45:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:45:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:04 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:04 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 09:45:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:04.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 09:45:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:04 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:45:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:04.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:45:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 36 B/s, 1 objects/s recovering
Jan 26 09:45:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 26 09:45:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 26 09:45:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 26 09:45:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 26 09:45:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 26 09:45:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 26 09:45:04 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 26 09:45:04 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 122 pg[9.19( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=87/87 les/c/f=88/88/0 sis=122) [0] r=0 lpr=122 pi=[87,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:45:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 26 09:45:05 compute-0 ceph-mon[74456]: pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 36 B/s, 1 objects/s recovering
Jan 26 09:45:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 26 09:45:05 compute-0 ceph-mon[74456]: osdmap e122: 3 total, 3 up, 3 in
Jan 26 09:45:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 26 09:45:05 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 26 09:45:05 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 123 pg[9.19( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=87/87 les/c/f=88/88/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[87,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:45:05 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 123 pg[9.19( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=87/87 les/c/f=88/88/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[87,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 26 09:45:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:06 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:06 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:06.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:06 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:06.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 181 B/s rd, 0 op/s
Jan 26 09:45:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 26 09:45:06 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 26 09:45:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:45:06] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Jan 26 09:45:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:45:06] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Jan 26 09:45:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 26 09:45:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 26 09:45:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 26 09:45:07 compute-0 ceph-mon[74456]: osdmap e123: 3 total, 3 up, 3 in
Jan 26 09:45:07 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 26 09:45:07 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 26 09:45:07 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 124 pg[9.1a( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=89/89 les/c/f=90/90/0 sis=124) [0] r=0 lpr=124 pi=[89,124)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:45:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 09:45:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 26 09:45:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 26 09:45:07 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 26 09:45:07 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 125 pg[9.19( v 60'1159 (0'0,60'1159] local-lis/les=0/0 n=7 ec=63/48 lis/c=123/87 les/c/f=124/88/0 sis=125) [0] r=0 lpr=125 pi=[87,125)/1 luod=0'0 crt=60'1159 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:45:07 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 125 pg[9.1a( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=89/89 les/c/f=90/90/0 sis=125) [0]/[1] r=-1 lpr=125 pi=[89,125)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:45:07 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 125 pg[9.1a( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=89/89 les/c/f=90/90/0 sis=125) [0]/[1] r=-1 lpr=125 pi=[89,125)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 26 09:45:07 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 125 pg[9.19( v 60'1159 (0'0,60'1159] local-lis/les=0/0 n=7 ec=63/48 lis/c=123/87 les/c/f=124/88/0 sis=125) [0] r=0 lpr=125 pi=[87,125)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:45:08 compute-0 ceph-mon[74456]: pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 181 B/s rd, 0 op/s
Jan 26 09:45:08 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 26 09:45:08 compute-0 ceph-mon[74456]: osdmap e124: 3 total, 3 up, 3 in
Jan 26 09:45:08 compute-0 ceph-mon[74456]: osdmap e125: 3 total, 3 up, 3 in
Jan 26 09:45:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:08 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:08 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:08 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:08.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:08.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:45:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 26 09:45:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 26 09:45:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 26 09:45:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 26 09:45:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 26 09:45:08 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 26 09:45:08 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 126 pg[9.1b( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=75/75 les/c/f=76/76/0 sis=126) [0] r=0 lpr=126 pi=[75,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:45:08 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 126 pg[9.19( v 60'1159 (0'0,60'1159] local-lis/les=125/126 n=7 ec=63/48 lis/c=123/87 les/c/f=124/88/0 sis=125) [0] r=0 lpr=125 pi=[87,125)/1 crt=60'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:45:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 26 09:45:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 26 09:45:09 compute-0 ceph-mon[74456]: osdmap e126: 3 total, 3 up, 3 in
Jan 26 09:45:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 26 09:45:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 26 09:45:09 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 26 09:45:09 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 127 pg[9.1a( v 60'1159 (0'0,60'1159] local-lis/les=0/0 n=4 ec=63/48 lis/c=125/89 les/c/f=126/90/0 sis=127) [0] r=0 lpr=127 pi=[89,127)/1 luod=0'0 crt=60'1159 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:45:09 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 127 pg[9.1a( v 60'1159 (0'0,60'1159] local-lis/les=0/0 n=4 ec=63/48 lis/c=125/89 les/c/f=126/90/0 sis=127) [0] r=0 lpr=127 pi=[89,127)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:45:09 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 127 pg[9.1b( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=75/75 les/c/f=76/76/0 sis=127) [0]/[2] r=-1 lpr=127 pi=[75,127)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:45:09 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 127 pg[9.1b( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=75/75 les/c/f=76/76/0 sis=127) [0]/[2] r=-1 lpr=127 pi=[75,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 26 09:45:10 compute-0 ceph-mon[74456]: pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:45:10 compute-0 ceph-mon[74456]: osdmap e127: 3 total, 3 up, 3 in
Jan 26 09:45:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:10 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:10 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:10 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb914004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:10.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:45:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:10.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:45:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 54 B/s, 2 objects/s recovering
Jan 26 09:45:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 26 09:45:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 26 09:45:10 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 26 09:45:10 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 128 pg[9.1a( v 60'1159 (0'0,60'1159] local-lis/les=127/128 n=4 ec=63/48 lis/c=125/89 les/c/f=126/90/0 sis=127) [0] r=0 lpr=127 pi=[89,127)/1 crt=60'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:45:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 26 09:45:11 compute-0 ceph-mon[74456]: pgmap v65: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 54 B/s, 2 objects/s recovering
Jan 26 09:45:11 compute-0 ceph-mon[74456]: osdmap e128: 3 total, 3 up, 3 in
Jan 26 09:45:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 26 09:45:11 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 26 09:45:11 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 129 pg[9.1b( v 60'1159 (0'0,60'1159] local-lis/les=0/0 n=2 ec=63/48 lis/c=127/75 les/c/f=128/76/0 sis=129) [0] r=0 lpr=129 pi=[75,129)/1 luod=0'0 crt=60'1159 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:45:11 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 129 pg[9.1b( v 60'1159 (0'0,60'1159] local-lis/les=0/0 n=2 ec=63/48 lis/c=127/75 les/c/f=128/76/0 sis=129) [0] r=0 lpr=129 pi=[75,129)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:45:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:12 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003af0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:12 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:12 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:12.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 09:45:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:12.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 09:45:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 54 B/s, 2 objects/s recovering
Jan 26 09:45:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:45:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 26 09:45:12 compute-0 ceph-mon[74456]: osdmap e129: 3 total, 3 up, 3 in
Jan 26 09:45:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 26 09:45:13 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 26 09:45:13 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 130 pg[9.1b( v 60'1159 (0'0,60'1159] local-lis/les=129/130 n=2 ec=63/48 lis/c=127/75 les/c/f=128/76/0 sis=129) [0] r=0 lpr=129 pi=[75,129)/1 crt=60'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:45:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:14 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:14 compute-0 ceph-mon[74456]: pgmap v68: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 54 B/s, 2 objects/s recovering
Jan 26 09:45:14 compute-0 ceph-mon[74456]: osdmap e130: 3 total, 3 up, 3 in
Jan 26 09:45:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:14 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:14 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:14.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:14.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:45:14 compute-0 sudo[108694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:45:14 compute-0 sudo[108694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:45:14 compute-0 sudo[108694]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:15 compute-0 ceph-mon[74456]: pgmap v70: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:45:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:16 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:16 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:16 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003b30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:16.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:45:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:16.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:45:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 26 09:45:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 26 09:45:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 26 09:45:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 26 09:45:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:45:16] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Jan 26 09:45:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:45:16] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Jan 26 09:45:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 26 09:45:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 26 09:45:16 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 26 09:45:16 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 26 09:45:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:45:17 compute-0 ceph-mon[74456]: pgmap v71: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 26 09:45:17 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 26 09:45:17 compute-0 ceph-mon[74456]: osdmap e131: 3 total, 3 up, 3 in
Jan 26 09:45:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:18 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:18 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8ec003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:18 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:45:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:18.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:45:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:45:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:18.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v73: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 148 B/s rd, 0 op/s; 15 B/s, 0 objects/s recovering
Jan 26 09:45:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 26 09:45:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:45:18
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', '.nfs', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.log', '.mgr']
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 09:45:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:45:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:45:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 26 09:45:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 26 09:45:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:45:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:45:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 26 09:45:18 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:45:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:45:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 26 09:45:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 26 09:45:19 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 26 09:45:19 compute-0 ceph-mon[74456]: pgmap v73: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 148 B/s rd, 0 op/s; 15 B/s, 0 objects/s recovering
Jan 26 09:45:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 26 09:45:19 compute-0 ceph-mon[74456]: osdmap e132: 3 total, 3 up, 3 in
Jan 26 09:45:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:20 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003be0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:20 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:20 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc0008d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:20.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:20.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v76: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 26 09:45:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 26 09:45:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 26 09:45:20 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 26 09:45:20 compute-0 ceph-mon[74456]: osdmap e133: 3 total, 3 up, 3 in
Jan 26 09:45:20 compute-0 ceph-mon[74456]: osdmap e134: 3 total, 3 up, 3 in
Jan 26 09:45:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 26 09:45:21 compute-0 ceph-mon[74456]: pgmap v76: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 26 09:45:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 26 09:45:21 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 26 09:45:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:22 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:22 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003c00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:22 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:45:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:22.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:45:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:22.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:45:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:45:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 26 09:45:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 26 09:45:22 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 26 09:45:22 compute-0 ceph-mon[74456]: osdmap e135: 3 total, 3 up, 3 in
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:22.944636) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420722944689, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2962, "num_deletes": 251, "total_data_size": 6646235, "memory_usage": 6872816, "flush_reason": "Manual Compaction"}
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420722984409, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6284817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7846, "largest_seqno": 10807, "table_properties": {"data_size": 6270463, "index_size": 9248, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4037, "raw_key_size": 36445, "raw_average_key_size": 22, "raw_value_size": 6239048, "raw_average_value_size": 3914, "num_data_blocks": 402, "num_entries": 1594, "num_filter_entries": 1594, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420622, "oldest_key_time": 1769420622, "file_creation_time": 1769420722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 39812 microseconds, and 10665 cpu microseconds.
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:22.984454) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6284817 bytes OK
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:22.984472) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:22.986100) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:22.986116) EVENT_LOG_v1 {"time_micros": 1769420722986112, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:22.986132) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 6632378, prev total WAL file size 6632378, number of live WAL files 2.
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:22.987417) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(6137KB)], [23(10MB)]
Jan 26 09:45:22 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420722987484, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 17228014, "oldest_snapshot_seqno": -1}
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4053 keys, 14806594 bytes, temperature: kUnknown
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420723061698, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14806594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14774111, "index_size": 21237, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 103464, "raw_average_key_size": 25, "raw_value_size": 14694529, "raw_average_value_size": 3625, "num_data_blocks": 914, "num_entries": 4053, "num_filter_entries": 4053, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769420722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:23.061960) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14806594 bytes
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:23.063349) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 231.9 rd, 199.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.0, 10.4 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(5.1) write-amplify(2.4) OK, records in: 4591, records dropped: 538 output_compression: NoCompression
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:23.063371) EVENT_LOG_v1 {"time_micros": 1769420723063360, "job": 8, "event": "compaction_finished", "compaction_time_micros": 74295, "compaction_time_cpu_micros": 30120, "output_level": 6, "num_output_files": 1, "total_output_size": 14806594, "num_input_records": 4591, "num_output_records": 4053, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420723064626, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420723066722, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:22.987309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:23.066811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:23.066818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:23.066820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:23.066822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:45:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:45:23.066824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:45:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=infra.usagestats t=2026-01-26T09:45:23.407773531Z level=info msg="Usage stats are ready to report"
Jan 26 09:45:23 compute-0 ceph-mon[74456]: pgmap v79: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:45:23 compute-0 ceph-mon[74456]: osdmap e136: 3 total, 3 up, 3 in
Jan 26 09:45:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:24 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc0008d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:24 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:24 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f0003c20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:45:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:24.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:45:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:24.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v81: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:45:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094524 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:45:25 compute-0 ceph-mon[74456]: pgmap v81: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:45:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:26 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:26 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc001f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:26 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e40034f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:26.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:26.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v82: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 36 B/s, 0 objects/s recovering
Jan 26 09:45:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 26 09:45:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 26 09:45:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:45:26] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Jan 26 09:45:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:45:26] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Jan 26 09:45:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 26 09:45:27 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 26 09:45:27 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 26 09:45:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 26 09:45:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 26 09:45:27 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 137 pg[9.1e( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=79/79 les/c/f=80/80/0 sis=137) [0] r=0 lpr=137 pi=[79,137)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:45:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:45:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 26 09:45:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 26 09:45:27 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 26 09:45:27 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 138 pg[9.1e( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=79/79 les/c/f=80/80/0 sis=138) [0]/[1] r=-1 lpr=138 pi=[79,138)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:45:27 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 138 pg[9.1e( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=79/79 les/c/f=80/80/0 sis=138) [0]/[1] r=-1 lpr=138 pi=[79,138)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 26 09:45:28 compute-0 ceph-mon[74456]: pgmap v82: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 36 B/s, 0 objects/s recovering
Jan 26 09:45:28 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 26 09:45:28 compute-0 ceph-mon[74456]: osdmap e137: 3 total, 3 up, 3 in
Jan 26 09:45:28 compute-0 ceph-mon[74456]: osdmap e138: 3 total, 3 up, 3 in
Jan 26 09:45:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:28 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e40034f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:28 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:28 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc001f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:28.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:28.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v85: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 36 B/s, 0 objects/s recovering
Jan 26 09:45:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 26 09:45:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:45:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 26 09:45:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:45:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 26 09:45:28 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 26 09:45:28 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 139 pg[9.1f( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=101/101 les/c/f=102/102/0 sis=139) [0] r=0 lpr=139 pi=[101,139)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:45:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 09:45:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 09:45:29 compute-0 ceph-mon[74456]: osdmap e139: 3 total, 3 up, 3 in
Jan 26 09:45:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 26 09:45:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 26 09:45:29 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 26 09:45:29 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 140 pg[9.1f( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=101/101 les/c/f=102/102/0 sis=140) [0]/[1] r=-1 lpr=140 pi=[101,140)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:45:29 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 140 pg[9.1e( v 60'1159 (0'0,60'1159] local-lis/les=0/0 n=5 ec=63/48 lis/c=138/79 les/c/f=139/80/0 sis=140) [0] r=0 lpr=140 pi=[79,140)/1 luod=0'0 crt=60'1159 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:45:29 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 140 pg[9.1e( v 60'1159 (0'0,60'1159] local-lis/les=0/0 n=5 ec=63/48 lis/c=138/79 les/c/f=139/80/0 sis=140) [0] r=0 lpr=140 pi=[79,140)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:45:29 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 140 pg[9.1f( empty local-lis/les=0/0 n=0 ec=63/48 lis/c=101/101 les/c/f=102/102/0 sis=140) [0]/[1] r=-1 lpr=140 pi=[101,140)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 26 09:45:30 compute-0 ceph-mon[74456]: pgmap v85: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 36 B/s, 0 objects/s recovering
Jan 26 09:45:30 compute-0 ceph-mon[74456]: osdmap e140: 3 total, 3 up, 3 in
Jan 26 09:45:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:30 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc001f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:30 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc001f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:30 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:30.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:30.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:45:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 26 09:45:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 26 09:45:31 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 26 09:45:31 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 141 pg[9.1e( v 60'1159 (0'0,60'1159] local-lis/les=140/141 n=5 ec=63/48 lis/c=138/79 les/c/f=139/80/0 sis=140) [0] r=0 lpr=140 pi=[79,140)/1 crt=60'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:45:31 compute-0 ceph-mon[74456]: pgmap v88: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:45:31 compute-0 ceph-mon[74456]: osdmap e141: 3 total, 3 up, 3 in
Jan 26 09:45:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:32 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc001f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:32 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc001f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:32 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc001f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:45:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:32.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:45:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 26 09:45:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 26 09:45:32 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 26 09:45:32 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 142 pg[9.1f( v 60'1159 (0'0,60'1159] local-lis/les=0/0 n=5 ec=63/48 lis/c=140/101 les/c/f=141/102/0 sis=142) [0] r=0 lpr=142 pi=[101,142)/1 luod=0'0 crt=60'1159 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 26 09:45:32 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 142 pg[9.1f( v 60'1159 (0'0,60'1159] local-lis/les=0/0 n=5 ec=63/48 lis/c=140/101 les/c/f=141/102/0 sis=142) [0] r=0 lpr=142 pi=[101,142)/1 crt=60'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 09:45:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:32.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:45:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:45:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 26 09:45:33 compute-0 ceph-mon[74456]: osdmap e142: 3 total, 3 up, 3 in
Jan 26 09:45:33 compute-0 ceph-mon[74456]: pgmap v91: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:45:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:45:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:45:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 26 09:45:33 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 26 09:45:33 compute-0 ceph-osd[82841]: osd.0 pg_epoch: 143 pg[9.1f( v 60'1159 (0'0,60'1159] local-lis/les=142/143 n=5 ec=63/48 lis/c=140/101 les/c/f=141/102/0 sis=142) [0] r=0 lpr=142 pi=[101,142)/1 crt=60'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 09:45:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:34 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904002d60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:34 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e40034f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:34 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:34.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:45:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:34.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:45:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v93: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:45:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:45:34 compute-0 ceph-mon[74456]: osdmap e143: 3 total, 3 up, 3 in
Jan 26 09:45:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:34 : epoch 6977372f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:45:34 compute-0 sudo[108812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:45:34 compute-0 sudo[108812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:45:34 compute-0 sudo[108812]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:35 compute-0 ceph-mon[74456]: pgmap v93: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Jan 26 09:45:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:36 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc001f60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:36 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904002d60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:36 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e40034f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:45:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:36.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:45:36 compute-0 sshd-session[108837]: Invalid user admin from 157.245.76.178 port 34168
Jan 26 09:45:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:36.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:36 compute-0 sshd-session[108837]: Connection closed by invalid user admin 157.245.76.178 port 34168 [preauth]
Jan 26 09:45:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s; 36 B/s, 1 objects/s recovering
Jan 26 09:45:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:45:36] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Jan 26 09:45:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:45:36] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Jan 26 09:45:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:45:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:37 : epoch 6977372f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:45:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:37 : epoch 6977372f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:45:37 compute-0 ceph-mon[74456]: pgmap v94: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s; 36 B/s, 1 objects/s recovering
Jan 26 09:45:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:38 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:38 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc003c40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:38 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc003c40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:45:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:38.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:45:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:45:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:38.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:45:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v95: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 830 B/s rd, 138 B/s wr, 1 op/s; 29 B/s, 1 objects/s recovering
Jan 26 09:45:39 compute-0 ceph-mon[74456]: pgmap v95: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 830 B/s rd, 138 B/s wr, 1 op/s; 29 B/s, 1 objects/s recovering
Jan 26 09:45:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:40 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc003c40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:40 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f4002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:40 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc003c40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:40.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:40.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v96: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s; 26 B/s, 1 objects/s recovering
Jan 26 09:45:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:40 : epoch 6977372f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:45:41 compute-0 ceph-mon[74456]: pgmap v96: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s; 26 B/s, 1 objects/s recovering
Jan 26 09:45:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:42 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc003c40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:42 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc003c40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:42 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f40047c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:42.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:45:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:42.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:45:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v97: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s; 21 B/s, 0 objects/s recovering
Jan 26 09:45:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:45:43 compute-0 sudo[108599]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:43 compute-0 sudo[108996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpwelvmcpguvqltpbcatpuzbyajuhkor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420743.5803456-364-235737345000974/AnsiballZ_command.py'
Jan 26 09:45:43 compute-0 sudo[108996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:43 compute-0 ceph-mon[74456]: pgmap v97: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s; 21 B/s, 0 objects/s recovering
Jan 26 09:45:44 compute-0 python3.9[108998]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:45:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:44 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc003c40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:44 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc003c40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:44 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4004020 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:45:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:44.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:45:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:45:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:44.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:45:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.0 KiB/s wr, 3 op/s; 20 B/s, 0 objects/s recovering
Jan 26 09:45:44 compute-0 sudo[108996]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:45 compute-0 sudo[109285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luzlcmgxosqkruqwokqokuebdhikceje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420745.132728-388-238482223505997/AnsiballZ_selinux.py'
Jan 26 09:45:45 compute-0 sudo[109285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:46 compute-0 ceph-mon[74456]: pgmap v98: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.0 KiB/s wr, 3 op/s; 20 B/s, 0 objects/s recovering
Jan 26 09:45:46 compute-0 python3.9[109287]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 26 09:45:46 compute-0 sudo[109285]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:46 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f40047c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:46 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc003c40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:46 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc003c40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:46.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:45:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:46.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:45:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v99: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s; 18 B/s, 0 objects/s recovering
Jan 26 09:45:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:45:46] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Jan 26 09:45:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:45:46] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Jan 26 09:45:46 compute-0 sudo[109439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vznrrusfcgbyrzkdlgyirxauojvuvpro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420746.5466838-421-112402204747181/AnsiballZ_command.py'
Jan 26 09:45:46 compute-0 sudo[109439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094546 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:45:46 compute-0 python3.9[109441]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 26 09:45:46 compute-0 sudo[109439]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:47 compute-0 sudo[109591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugqgjzrijopzafwouicjagnczlagxbke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420747.3106287-445-6864424507445/AnsiballZ_file.py'
Jan 26 09:45:47 compute-0 sudo[109591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:45:47 compute-0 python3.9[109593]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:45:47 compute-0 sudo[109591]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:48 compute-0 ceph-mon[74456]: pgmap v99: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s; 18 B/s, 0 objects/s recovering
Jan 26 09:45:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:48 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4004020 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:48 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f40047c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:48 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc003c40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:45:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:48.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:45:48 compute-0 sudo[109745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifrooahsjhuodwzersaoqzsnvpcwjatv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420747.9961147-469-90510494524136/AnsiballZ_mount.py'
Jan 26 09:45:48 compute-0 sudo[109745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:48.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:45:48 compute-0 python3.9[109747]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 26 09:45:48 compute-0 sudo[109745]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:45:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:45:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:45:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:45:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:45:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7ff0c7dce610>)]
Jan 26 09:45:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 26 09:45:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:45:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7ff0c7dce4f0>)]
Jan 26 09:45:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 26 09:45:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:45:49 compute-0 sudo[109897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzgnpiulxvypdqymytgqzlwqrfqmpkii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420749.6645675-553-41571855589985/AnsiballZ_file.py'
Jan 26 09:45:49 compute-0 sudo[109897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:50 compute-0 ceph-mon[74456]: pgmap v100: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:45:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:50 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904002620 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:50 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904002620 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:50 compute-0 python3.9[109899]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:45:50 compute-0 sudo[109897]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:50 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f40047c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:50.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:45:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:50.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:45:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 26 09:45:50 compute-0 sudo[110051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqjmrnjkngbnuuqjtsvuvamdaaogtkhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420750.370086-577-221778279962486/AnsiballZ_stat.py'
Jan 26 09:45:50 compute-0 sudo[110051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:50 compute-0 python3.9[110053]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:45:50 compute-0 sudo[110051]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:51 compute-0 sudo[110129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orzwexdbnkczxzisnmuqtqbzpwhlktlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420750.370086-577-221778279962486/AnsiballZ_file.py'
Jan 26 09:45:51 compute-0 sudo[110129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:51 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.zllcia(active, since 92s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:45:51 compute-0 sudo[110132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:45:51 compute-0 sudo[110132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:45:51 compute-0 sudo[110132]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:51 compute-0 sudo[110157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 26 09:45:51 compute-0 sudo[110157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:45:51 compute-0 python3.9[110131]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:45:51 compute-0 sudo[110129]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:51 compute-0 podman[110279]: 2026-01-26 09:45:51.862796142 +0000 UTC m=+0.079532588 container exec 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 09:45:51 compute-0 podman[110279]: 2026-01-26 09:45:51.990659534 +0000 UTC m=+0.207395970 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:45:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:52 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc003c40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:52 compute-0 ceph-mon[74456]: pgmap v101: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 26 09:45:52 compute-0 ceph-mon[74456]: mgrmap e32: compute-0.zllcia(active, since 92s), standbys: compute-1.xammti, compute-2.oynaeu
Jan 26 09:45:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:52 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904002620 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:52 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904002620 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:52.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:52 compute-0 podman[110473]: 2026-01-26 09:45:52.416523656 +0000 UTC m=+0.048135499 container exec 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:45:52 compute-0 podman[110473]: 2026-01-26 09:45:52.42544921 +0000 UTC m=+0.057061033 container exec_died 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:45:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:45:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:52.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:45:52 compute-0 sudo[110566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfrnxrgtguypcxsielypxmelaodusrrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420752.2344375-640-33341810836406/AnsiballZ_stat.py'
Jan 26 09:45:52 compute-0 sudo[110566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 341 B/s wr, 0 op/s
Jan 26 09:45:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:45:52 compute-0 python3.9[110577]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:45:52 compute-0 sudo[110566]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:52 compute-0 podman[110619]: 2026-01-26 09:45:52.873732416 +0000 UTC m=+0.151200121 container exec d3395b53724857015134a8bdb584007eb1b94a5b002c559505dba80a9d92ea83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:45:52 compute-0 podman[110619]: 2026-01-26 09:45:52.88411248 +0000 UTC m=+0.161580165 container exec_died d3395b53724857015134a8bdb584007eb1b94a5b002c559505dba80a9d92ea83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:45:53 compute-0 podman[110710]: 2026-01-26 09:45:53.072144759 +0000 UTC m=+0.043445671 container exec 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 09:45:53 compute-0 podman[110710]: 2026-01-26 09:45:53.087580801 +0000 UTC m=+0.058881693 container exec_died 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 09:45:53 compute-0 ceph-mon[74456]: pgmap v102: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 341 B/s wr, 0 op/s
Jan 26 09:45:53 compute-0 podman[110777]: 2026-01-26 09:45:53.265458242 +0000 UTC m=+0.052797986 container exec 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, version=2.2.4, release=1793, io.openshift.expose-services=, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, architecture=x86_64, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.buildah.version=1.28.2, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived)
Jan 26 09:45:53 compute-0 podman[110777]: 2026-01-26 09:45:53.279576289 +0000 UTC m=+0.066916013 container exec_died 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, io.openshift.expose-services=, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, name=keepalived, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc.)
Jan 26 09:45:53 compute-0 podman[110873]: 2026-01-26 09:45:53.482732682 +0000 UTC m=+0.055443229 container exec c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:45:53 compute-0 podman[110873]: 2026-01-26 09:45:53.513302019 +0000 UTC m=+0.086012586 container exec_died c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:45:53 compute-0 podman[110974]: 2026-01-26 09:45:53.730149547 +0000 UTC m=+0.056290992 container exec ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:45:53 compute-0 podman[110974]: 2026-01-26 09:45:53.888788301 +0000 UTC m=+0.214929786 container exec_died ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:45:53 compute-0 sudo[111074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krhpvjkqfgxpaggyengcdehgxtwqzcfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420753.379473-679-210171381281813/AnsiballZ_getent.py'
Jan 26 09:45:53 compute-0 sudo[111074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:54 compute-0 python3.9[111077]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 26 09:45:54 compute-0 sudo[111074]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:54 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f40047c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:54 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f40047c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:54 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8e4004020 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:54.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:54 compute-0 podman[111184]: 2026-01-26 09:45:54.35872082 +0000 UTC m=+0.076164116 container exec 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:45:54 compute-0 podman[111184]: 2026-01-26 09:45:54.442581526 +0000 UTC m=+0.160024772 container exec_died 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:45:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:45:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:54.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:45:54 compute-0 sudo[110157]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:45:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:45:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:45:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:45:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v103: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 341 B/s wr, 0 op/s
Jan 26 09:45:54 compute-0 sudo[111279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:45:54 compute-0 sudo[111279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:45:54 compute-0 sudo[111279]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:54 compute-0 sudo[111327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:45:54 compute-0 sudo[111327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:45:54 compute-0 sudo[111399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuiebkdmqbupxynwjrcwvvicppaosxeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420754.421846-709-28763918703811/AnsiballZ_getent.py'
Jan 26 09:45:54 compute-0 sudo[111399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:54 compute-0 python3.9[111401]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 26 09:45:54 compute-0 sudo[111399]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:55 compute-0 sudo[111444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:45:55 compute-0 sudo[111444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:45:55 compute-0 sudo[111444]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:55 compute-0 sudo[111327]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:45:55 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:45:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:45:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:45:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:45:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:45:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:45:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:45:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:45:55 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:45:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:45:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:45:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:45:55 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:45:55 compute-0 sudo[111535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:45:55 compute-0 sudo[111535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:45:55 compute-0 sudo[111535]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:55 compute-0 sudo[111560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:45:55 compute-0 sudo[111560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:45:55 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:45:55 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:45:55 compute-0 ceph-mon[74456]: pgmap v103: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 341 B/s wr, 0 op/s
Jan 26 09:45:55 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:45:55 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:45:55 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:45:55 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:45:55 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:45:55 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:45:55 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:45:55 compute-0 podman[111649]: 2026-01-26 09:45:55.779180427 +0000 UTC m=+0.053592539 container create de8c9b5ba46825cacf10345d8362a7087a08305a3470efa1ebd56bf783a81a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_ellis, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:45:55 compute-0 systemd[1]: Started libpod-conmon-de8c9b5ba46825cacf10345d8362a7087a08305a3470efa1ebd56bf783a81a1c.scope.
Jan 26 09:45:55 compute-0 podman[111649]: 2026-01-26 09:45:55.756747643 +0000 UTC m=+0.031159805 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:45:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:45:55 compute-0 sudo[111717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwbwypdevrkqxbovvxgyacvvyiubbkgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420755.137675-733-210344455650979/AnsiballZ_group.py'
Jan 26 09:45:55 compute-0 sudo[111717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:55 compute-0 podman[111649]: 2026-01-26 09:45:55.878700722 +0000 UTC m=+0.153112864 container init de8c9b5ba46825cacf10345d8362a7087a08305a3470efa1ebd56bf783a81a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_ellis, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 09:45:55 compute-0 podman[111649]: 2026-01-26 09:45:55.886102475 +0000 UTC m=+0.160514587 container start de8c9b5ba46825cacf10345d8362a7087a08305a3470efa1ebd56bf783a81a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:45:55 compute-0 podman[111649]: 2026-01-26 09:45:55.889618691 +0000 UTC m=+0.164030823 container attach de8c9b5ba46825cacf10345d8362a7087a08305a3470efa1ebd56bf783a81a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_ellis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:45:55 compute-0 dazzling_ellis[111712]: 167 167
Jan 26 09:45:55 compute-0 systemd[1]: libpod-de8c9b5ba46825cacf10345d8362a7087a08305a3470efa1ebd56bf783a81a1c.scope: Deactivated successfully.
Jan 26 09:45:55 compute-0 conmon[111712]: conmon de8c9b5ba46825cacf10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de8c9b5ba46825cacf10345d8362a7087a08305a3470efa1ebd56bf783a81a1c.scope/container/memory.events
Jan 26 09:45:55 compute-0 podman[111649]: 2026-01-26 09:45:55.89395056 +0000 UTC m=+0.168362692 container died de8c9b5ba46825cacf10345d8362a7087a08305a3470efa1ebd56bf783a81a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fdf4bf598690d24484516c0ca1e5515ea6901b5ef73c19d2a876cb1bc56054b-merged.mount: Deactivated successfully.
Jan 26 09:45:55 compute-0 podman[111649]: 2026-01-26 09:45:55.943663151 +0000 UTC m=+0.218075293 container remove de8c9b5ba46825cacf10345d8362a7087a08305a3470efa1ebd56bf783a81a1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_ellis, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:45:55 compute-0 systemd[1]: libpod-conmon-de8c9b5ba46825cacf10345d8362a7087a08305a3470efa1ebd56bf783a81a1c.scope: Deactivated successfully.
Jan 26 09:45:56 compute-0 python3.9[111720]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 09:45:56 compute-0 sudo[111717]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:56 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904002620 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:56 compute-0 podman[111742]: 2026-01-26 09:45:56.121337526 +0000 UTC m=+0.047941084 container create 8c1b8d8263646a626cdf578123c17815efea65ad9ed4b4239ca3c44f83833c1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_hodgkin, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:45:56 compute-0 systemd[1]: Started libpod-conmon-8c1b8d8263646a626cdf578123c17815efea65ad9ed4b4239ca3c44f83833c1c.scope.
Jan 26 09:45:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:56 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f40047c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:56 compute-0 podman[111742]: 2026-01-26 09:45:56.102550762 +0000 UTC m=+0.029154330 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:45:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898fe7db8f93c0619820ad6fa607fdbfd401452556ff865ae5bdd97cd98d41f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898fe7db8f93c0619820ad6fa607fdbfd401452556ff865ae5bdd97cd98d41f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898fe7db8f93c0619820ad6fa607fdbfd401452556ff865ae5bdd97cd98d41f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898fe7db8f93c0619820ad6fa607fdbfd401452556ff865ae5bdd97cd98d41f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/898fe7db8f93c0619820ad6fa607fdbfd401452556ff865ae5bdd97cd98d41f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:45:56 compute-0 podman[111742]: 2026-01-26 09:45:56.238979027 +0000 UTC m=+0.165582665 container init 8c1b8d8263646a626cdf578123c17815efea65ad9ed4b4239ca3c44f83833c1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:45:56 compute-0 podman[111742]: 2026-01-26 09:45:56.251937682 +0000 UTC m=+0.178541260 container start 8c1b8d8263646a626cdf578123c17815efea65ad9ed4b4239ca3c44f83833c1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 09:45:56 compute-0 podman[111742]: 2026-01-26 09:45:56.258316307 +0000 UTC m=+0.184919905 container attach 8c1b8d8263646a626cdf578123c17815efea65ad9ed4b4239ca3c44f83833c1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_hodgkin, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 09:45:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:56 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8f40047c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:56.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.002000053s ======
Jan 26 09:45:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:56.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 26 09:45:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 341 B/s wr, 0 op/s
Jan 26 09:45:56 compute-0 competent_hodgkin[111781]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:45:56 compute-0 competent_hodgkin[111781]: --> All data devices are unavailable
Jan 26 09:45:56 compute-0 systemd[1]: libpod-8c1b8d8263646a626cdf578123c17815efea65ad9ed4b4239ca3c44f83833c1c.scope: Deactivated successfully.
Jan 26 09:45:56 compute-0 podman[111742]: 2026-01-26 09:45:56.597756002 +0000 UTC m=+0.524359560 container died 8c1b8d8263646a626cdf578123c17815efea65ad9ed4b4239ca3c44f83833c1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:45:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-898fe7db8f93c0619820ad6fa607fdbfd401452556ff865ae5bdd97cd98d41f8-merged.mount: Deactivated successfully.
Jan 26 09:45:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:45:56] "GET /metrics HTTP/1.1" 200 48249 "" "Prometheus/2.51.0"
Jan 26 09:45:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:45:56] "GET /metrics HTTP/1.1" 200 48249 "" "Prometheus/2.51.0"
Jan 26 09:45:56 compute-0 sudo[111934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ansvrhexiksnhybvtrmtnyfinjlkoeol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420756.3694093-760-171516764334755/AnsiballZ_file.py'
Jan 26 09:45:56 compute-0 sudo[111934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:56 compute-0 podman[111742]: 2026-01-26 09:45:56.66161534 +0000 UTC m=+0.588218918 container remove 8c1b8d8263646a626cdf578123c17815efea65ad9ed4b4239ca3c44f83833c1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 26 09:45:56 compute-0 systemd[1]: libpod-conmon-8c1b8d8263646a626cdf578123c17815efea65ad9ed4b4239ca3c44f83833c1c.scope: Deactivated successfully.
Jan 26 09:45:56 compute-0 sudo[111560]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:56 compute-0 sudo[111940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:45:56 compute-0 sudo[111940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:45:56 compute-0 sudo[111940]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:56 compute-0 python3.9[111939]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 26 09:45:56 compute-0 sudo[111934]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:56 compute-0 sudo[111965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:45:56 compute-0 sudo[111965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:45:57 compute-0 podman[112056]: 2026-01-26 09:45:57.386295564 +0000 UTC m=+0.066320477 container create bcc3c979d255a213da38bc56b75ed1f83f735496d654aadc31aa4e4a2ec4eddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_gould, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:45:57 compute-0 systemd[1]: Started libpod-conmon-bcc3c979d255a213da38bc56b75ed1f83f735496d654aadc31aa4e4a2ec4eddb.scope.
Jan 26 09:45:57 compute-0 podman[112056]: 2026-01-26 09:45:57.363571192 +0000 UTC m=+0.043596075 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:45:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:45:57 compute-0 podman[112056]: 2026-01-26 09:45:57.494671942 +0000 UTC m=+0.174696895 container init bcc3c979d255a213da38bc56b75ed1f83f735496d654aadc31aa4e4a2ec4eddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:45:57 compute-0 podman[112056]: 2026-01-26 09:45:57.508332367 +0000 UTC m=+0.188357270 container start bcc3c979d255a213da38bc56b75ed1f83f735496d654aadc31aa4e4a2ec4eddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:45:57 compute-0 podman[112056]: 2026-01-26 09:45:57.513014184 +0000 UTC m=+0.193039087 container attach bcc3c979d255a213da38bc56b75ed1f83f735496d654aadc31aa4e4a2ec4eddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_gould, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 09:45:57 compute-0 cool_gould[112123]: 167 167
Jan 26 09:45:57 compute-0 systemd[1]: libpod-bcc3c979d255a213da38bc56b75ed1f83f735496d654aadc31aa4e4a2ec4eddb.scope: Deactivated successfully.
Jan 26 09:45:57 compute-0 podman[112056]: 2026-01-26 09:45:57.517131037 +0000 UTC m=+0.197155930 container died bcc3c979d255a213da38bc56b75ed1f83f735496d654aadc31aa4e4a2ec4eddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:45:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2c0ed92546f4396a20a9b4935793bce9bf14e53495e220559810765f33ca05d-merged.mount: Deactivated successfully.
Jan 26 09:45:57 compute-0 podman[112056]: 2026-01-26 09:45:57.576622676 +0000 UTC m=+0.256647539 container remove bcc3c979d255a213da38bc56b75ed1f83f735496d654aadc31aa4e4a2ec4eddb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 09:45:57 compute-0 systemd[1]: libpod-conmon-bcc3c979d255a213da38bc56b75ed1f83f735496d654aadc31aa4e4a2ec4eddb.scope: Deactivated successfully.
Jan 26 09:45:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:45:57 compute-0 sudo[112219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdkahqhjfayybvqwyreaclmbqlyovpmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420757.3524415-793-191632082531848/AnsiballZ_dnf.py'
Jan 26 09:45:57 compute-0 sudo[112219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:45:57 compute-0 podman[112224]: 2026-01-26 09:45:57.77177541 +0000 UTC m=+0.045773224 container create 6d84c8f1a00116c7231624c8ff01b797951d5a28f736ca18eefc9de9bc5cb13a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 09:45:57 compute-0 systemd[1]: Started libpod-conmon-6d84c8f1a00116c7231624c8ff01b797951d5a28f736ca18eefc9de9bc5cb13a.scope.
Jan 26 09:45:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:45:57 compute-0 podman[112224]: 2026-01-26 09:45:57.756692637 +0000 UTC m=+0.030690471 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47e1ca292bddf62c1f7c9ecd3ccbfa86309df9657b0295678477d07113e39b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47e1ca292bddf62c1f7c9ecd3ccbfa86309df9657b0295678477d07113e39b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47e1ca292bddf62c1f7c9ecd3ccbfa86309df9657b0295678477d07113e39b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:45:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47e1ca292bddf62c1f7c9ecd3ccbfa86309df9657b0295678477d07113e39b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:45:57 compute-0 podman[112224]: 2026-01-26 09:45:57.880035975 +0000 UTC m=+0.154033839 container init 6d84c8f1a00116c7231624c8ff01b797951d5a28f736ca18eefc9de9bc5cb13a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hamilton, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Jan 26 09:45:57 compute-0 podman[112224]: 2026-01-26 09:45:57.893653968 +0000 UTC m=+0.167651802 container start 6d84c8f1a00116c7231624c8ff01b797951d5a28f736ca18eefc9de9bc5cb13a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 26 09:45:57 compute-0 podman[112224]: 2026-01-26 09:45:57.899538749 +0000 UTC m=+0.173536653 container attach 6d84c8f1a00116c7231624c8ff01b797951d5a28f736ca18eefc9de9bc5cb13a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hamilton, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:45:57 compute-0 python3.9[112226]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:45:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:58 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8fc003c40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:58 compute-0 ceph-mon[74456]: pgmap v104: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 341 B/s wr, 0 op/s
Jan 26 09:45:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:58 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904002620 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]: {
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:     "0": [
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:         {
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:             "devices": [
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "/dev/loop3"
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:             ],
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:             "lv_name": "ceph_lv0",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:             "lv_size": "21470642176",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:             "name": "ceph_lv0",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:             "tags": {
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "ceph.cluster_name": "ceph",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "ceph.crush_device_class": "",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "ceph.encrypted": "0",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "ceph.osd_id": "0",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "ceph.type": "block",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "ceph.vdo": "0",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:                 "ceph.with_tpm": "0"
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:             },
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:             "type": "block",
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:             "vg_name": "ceph_vg0"
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:         }
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]:     ]
Jan 26 09:45:58 compute-0 unruffled_hamilton[112242]: }
Jan 26 09:45:58 compute-0 systemd[1]: libpod-6d84c8f1a00116c7231624c8ff01b797951d5a28f736ca18eefc9de9bc5cb13a.scope: Deactivated successfully.
Jan 26 09:45:58 compute-0 podman[112224]: 2026-01-26 09:45:58.251537447 +0000 UTC m=+0.525535261 container died 6d84c8f1a00116c7231624c8ff01b797951d5a28f736ca18eefc9de9bc5cb13a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hamilton, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 26 09:45:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:45:58 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904002620 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:45:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a47e1ca292bddf62c1f7c9ecd3ccbfa86309df9657b0295678477d07113e39b2-merged.mount: Deactivated successfully.
Jan 26 09:45:58 compute-0 podman[112224]: 2026-01-26 09:45:58.297021593 +0000 UTC m=+0.571019407 container remove 6d84c8f1a00116c7231624c8ff01b797951d5a28f736ca18eefc9de9bc5cb13a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:45:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:45:58.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:58 compute-0 systemd[1]: libpod-conmon-6d84c8f1a00116c7231624c8ff01b797951d5a28f736ca18eefc9de9bc5cb13a.scope: Deactivated successfully.
Jan 26 09:45:58 compute-0 sudo[111965]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:58 compute-0 sudo[112264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:45:58 compute-0 sudo[112264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:45:58 compute-0 sudo[112264]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:58 compute-0 sudo[112289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:45:58 compute-0 sudo[112289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:45:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:45:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:45:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:45:58.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:45:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 26 09:45:58 compute-0 podman[112354]: 2026-01-26 09:45:58.894237967 +0000 UTC m=+0.053286310 container create add7000998af74e9ec6da969262f91ef98c2409625120c298c7ef6f0951f9d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_shirley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 09:45:58 compute-0 systemd[1]: Started libpod-conmon-add7000998af74e9ec6da969262f91ef98c2409625120c298c7ef6f0951f9d83.scope.
Jan 26 09:45:58 compute-0 podman[112354]: 2026-01-26 09:45:58.870183928 +0000 UTC m=+0.029232261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:45:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:45:58 compute-0 podman[112354]: 2026-01-26 09:45:58.981434465 +0000 UTC m=+0.140482808 container init add7000998af74e9ec6da969262f91ef98c2409625120c298c7ef6f0951f9d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_shirley, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:45:58 compute-0 podman[112354]: 2026-01-26 09:45:58.988649683 +0000 UTC m=+0.147697996 container start add7000998af74e9ec6da969262f91ef98c2409625120c298c7ef6f0951f9d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_shirley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 26 09:45:58 compute-0 podman[112354]: 2026-01-26 09:45:58.991944253 +0000 UTC m=+0.150992616 container attach add7000998af74e9ec6da969262f91ef98c2409625120c298c7ef6f0951f9d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_shirley, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:45:58 compute-0 hopeful_shirley[112371]: 167 167
Jan 26 09:45:58 compute-0 systemd[1]: libpod-add7000998af74e9ec6da969262f91ef98c2409625120c298c7ef6f0951f9d83.scope: Deactivated successfully.
Jan 26 09:45:58 compute-0 podman[112354]: 2026-01-26 09:45:58.99547861 +0000 UTC m=+0.154526923 container died add7000998af74e9ec6da969262f91ef98c2409625120c298c7ef6f0951f9d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 09:45:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcdc9197aa78d26e0f09dca169eb9f7f609142be736ba59bfe25c4e7cbfac781-merged.mount: Deactivated successfully.
Jan 26 09:45:59 compute-0 podman[112354]: 2026-01-26 09:45:59.038140218 +0000 UTC m=+0.197188521 container remove add7000998af74e9ec6da969262f91ef98c2409625120c298c7ef6f0951f9d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_shirley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 09:45:59 compute-0 systemd[1]: libpod-conmon-add7000998af74e9ec6da969262f91ef98c2409625120c298c7ef6f0951f9d83.scope: Deactivated successfully.
Jan 26 09:45:59 compute-0 ceph-mon[74456]: pgmap v105: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 26 09:45:59 compute-0 podman[112395]: 2026-01-26 09:45:59.186730136 +0000 UTC m=+0.028464980 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:45:59 compute-0 podman[112395]: 2026-01-26 09:45:59.361652556 +0000 UTC m=+0.203387350 container create 95f4c732017a2499571d28b1b70e6b267c4f3a28767209cd8151c314f5fc81e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:45:59 compute-0 systemd[1]: Started libpod-conmon-95f4c732017a2499571d28b1b70e6b267c4f3a28767209cd8151c314f5fc81e6.scope.
Jan 26 09:45:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18ae8803b50f74fd5e92a7d523c8f64e90a42a22c27c45a3ba4f4b922ab48d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18ae8803b50f74fd5e92a7d523c8f64e90a42a22c27c45a3ba4f4b922ab48d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18ae8803b50f74fd5e92a7d523c8f64e90a42a22c27c45a3ba4f4b922ab48d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f18ae8803b50f74fd5e92a7d523c8f64e90a42a22c27c45a3ba4f4b922ab48d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:45:59 compute-0 podman[112395]: 2026-01-26 09:45:59.466382735 +0000 UTC m=+0.308117549 container init 95f4c732017a2499571d28b1b70e6b267c4f3a28767209cd8151c314f5fc81e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 26 09:45:59 compute-0 sudo[112219]: pam_unix(sudo:session): session closed for user root
Jan 26 09:45:59 compute-0 podman[112395]: 2026-01-26 09:45:59.481476368 +0000 UTC m=+0.323211132 container start 95f4c732017a2499571d28b1b70e6b267c4f3a28767209cd8151c314f5fc81e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hermann, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:45:59 compute-0 podman[112395]: 2026-01-26 09:45:59.492373547 +0000 UTC m=+0.334108321 container attach 95f4c732017a2499571d28b1b70e6b267c4f3a28767209cd8151c314f5fc81e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hermann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 26 09:45:59 compute-0 sudo[112612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnqijbhwedpzafksutcauygugemfosen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420759.6603158-817-65272236504890/AnsiballZ_file.py'
Jan 26 09:45:59 compute-0 sudo[112612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[98077]: 26/01/2026 09:46:00 : epoch 6977372f : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb904002620 fd 49 proxy ignored for local
Jan 26 09:46:00 compute-0 kernel: ganesha.nfsd[108807]: segfault at 50 ip 00007fb99a79332e sp 00007fb90fffe210 error 4 in libntirpc.so.5.8[7fb99a778000+2c000] likely on CPU 4 (core 0, socket 4)
Jan 26 09:46:00 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 09:46:00 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Jan 26 09:46:00 compute-0 lvm[112639]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:46:00 compute-0 lvm[112639]: VG ceph_vg0 finished
Jan 26 09:46:00 compute-0 systemd[1]: Started Process Core Dump (PID 112638/UID 0).
Jan 26 09:46:00 compute-0 python3.9[112620]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:46:00 compute-0 xenodochial_hermann[112412]: {}
Jan 26 09:46:00 compute-0 sudo[112612]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:00 compute-0 systemd[1]: libpod-95f4c732017a2499571d28b1b70e6b267c4f3a28767209cd8151c314f5fc81e6.scope: Deactivated successfully.
Jan 26 09:46:00 compute-0 systemd[1]: libpod-95f4c732017a2499571d28b1b70e6b267c4f3a28767209cd8151c314f5fc81e6.scope: Consumed 1.274s CPU time.
Jan 26 09:46:00 compute-0 conmon[112412]: conmon 95f4c732017a2499571d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95f4c732017a2499571d28b1b70e6b267c4f3a28767209cd8151c314f5fc81e6.scope/container/memory.events
Jan 26 09:46:00 compute-0 podman[112395]: 2026-01-26 09:46:00.233331977 +0000 UTC m=+1.075066751 container died 95f4c732017a2499571d28b1b70e6b267c4f3a28767209cd8151c314f5fc81e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hermann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 26 09:46:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f18ae8803b50f74fd5e92a7d523c8f64e90a42a22c27c45a3ba4f4b922ab48d-merged.mount: Deactivated successfully.
Jan 26 09:46:00 compute-0 podman[112395]: 2026-01-26 09:46:00.287952103 +0000 UTC m=+1.129686897 container remove 95f4c732017a2499571d28b1b70e6b267c4f3a28767209cd8151c314f5fc81e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hermann, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 09:46:00 compute-0 systemd[1]: libpod-conmon-95f4c732017a2499571d28b1b70e6b267c4f3a28767209cd8151c314f5fc81e6.scope: Deactivated successfully.
Jan 26 09:46:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:00.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:00 compute-0 sudo[112289]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:46:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:46:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:46:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:46:00 compute-0 sudo[112690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:46:00 compute-0 sudo[112690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:46:00 compute-0 sudo[112690]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:46:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:00.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:46:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v106: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 26 09:46:00 compute-0 sudo[112832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rukmbbuejwwczjcondtfzpkrgbclikld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420760.406189-841-172166484239736/AnsiballZ_stat.py'
Jan 26 09:46:00 compute-0 sudo[112832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:00 compute-0 python3.9[112834]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:46:00 compute-0 sudo[112832]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:01 compute-0 sudo[112910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsduqcpqdsnlnhkwzwseinzyzcduphgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420760.406189-841-172166484239736/AnsiballZ_file.py'
Jan 26 09:46:01 compute-0 sudo[112910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:01 compute-0 systemd-coredump[112640]: Process 98081 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 64:
                                                    #0  0x00007fb99a79332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 09:46:01 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:46:01 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:46:01 compute-0 ceph-mon[74456]: pgmap v106: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 26 09:46:01 compute-0 python3.9[112912]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:46:01 compute-0 sudo[112910]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:01 compute-0 systemd[1]: systemd-coredump@0-112638-0.service: Deactivated successfully.
Jan 26 09:46:01 compute-0 systemd[1]: systemd-coredump@0-112638-0.service: Consumed 1.163s CPU time.
Jan 26 09:46:01 compute-0 podman[112923]: 2026-01-26 09:46:01.521568043 +0000 UTC m=+0.029338624 container died d3395b53724857015134a8bdb584007eb1b94a5b002c559505dba80a9d92ea83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 26 09:46:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e8fcb101861e368803c33a29bf93002c5f91e6b91c443454877fb31bb48be69-merged.mount: Deactivated successfully.
Jan 26 09:46:01 compute-0 podman[112923]: 2026-01-26 09:46:01.56017163 +0000 UTC m=+0.067942131 container remove d3395b53724857015134a8bdb584007eb1b94a5b002c559505dba80a9d92ea83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 09:46:01 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 09:46:01 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 09:46:01 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.610s CPU time.
Jan 26 09:46:02 compute-0 sudo[113109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diyfuxcaskriyocwtivszrvfziqdpgue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420761.7439468-880-236006905164827/AnsiballZ_stat.py'
Jan 26 09:46:02 compute-0 sudo[113109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:02 compute-0 python3.9[113111]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:46:02 compute-0 sudo[113109]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:46:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:02.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:46:02 compute-0 sudo[113189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uspscpochiimktphfhndzjajwqdxrzvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420761.7439468-880-236006905164827/AnsiballZ_file.py'
Jan 26 09:46:02 compute-0 sudo[113189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:02.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:02 compute-0 python3.9[113191]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:46:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:46:02 compute-0 sudo[113189]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:03 compute-0 ceph-mon[74456]: pgmap v107: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:46:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:46:03 compute-0 sudo[113341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsxlatgryhwsjdapatsqwzgldlqkjlct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420763.5199566-925-190103127171240/AnsiballZ_dnf.py'
Jan 26 09:46:03 compute-0 sudo[113341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:03 compute-0 python3.9[113343]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:46:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:04.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:04.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v108: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:46:05 compute-0 sudo[113341]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:05 compute-0 ceph-mon[74456]: pgmap v108: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094606 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:46:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:06.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:06 compute-0 python3.9[113496]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:46:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:06.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v109: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:46:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:46:06] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Jan 26 09:46:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:46:06] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Jan 26 09:46:07 compute-0 ceph-mgr[74755]: [dashboard INFO request] [192.168.122.100:60650] [POST] [200] [0.121s] [4.0B] [71aebb21-7803-43b8-bf9e-d77effe13657] /api/prometheus_receiver
Jan 26 09:46:07 compute-0 python3.9[113650]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 26 09:46:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:46:07 compute-0 ceph-mon[74456]: pgmap v109: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:46:07 compute-0 python3.9[113800]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:46:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:08.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:08.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:46:09 compute-0 sudo[113952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meyubnlmhdhjpkquomatqofieensjyxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420768.5308716-1048-153809407391172/AnsiballZ_systemd.py'
Jan 26 09:46:09 compute-0 sudo[113952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:09 compute-0 python3.9[113954]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:46:09 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 26 09:46:09 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 26 09:46:09 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 26 09:46:09 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 26 09:46:09 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 26 09:46:09 compute-0 sudo[113952]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:09 compute-0 ceph-mon[74456]: pgmap v110: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:46:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:10.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:10.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:10 compute-0 python3.9[114117]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 26 09:46:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v111: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:46:11 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 1.
Jan 26 09:46:11 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:46:11 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.610s CPU time.
Jan 26 09:46:11 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:46:11 compute-0 ceph-mon[74456]: pgmap v111: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:46:12 compute-0 podman[114191]: 2026-01-26 09:46:12.021364716 +0000 UTC m=+0.042755323 container create 53de1ffe959a6ba0031b6f2a752b30c44883690df286ecc88268a2674ae8246d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 09:46:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bccb38f04fabb246efda2931563429c9811b6bbbf32cf72496e4366401b408/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 09:46:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bccb38f04fabb246efda2931563429c9811b6bbbf32cf72496e4366401b408/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:46:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bccb38f04fabb246efda2931563429c9811b6bbbf32cf72496e4366401b408/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:46:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bccb38f04fabb246efda2931563429c9811b6bbbf32cf72496e4366401b408/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:46:12 compute-0 podman[114191]: 2026-01-26 09:46:12.076218618 +0000 UTC m=+0.097609225 container init 53de1ffe959a6ba0031b6f2a752b30c44883690df286ecc88268a2674ae8246d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 26 09:46:12 compute-0 podman[114191]: 2026-01-26 09:46:12.081878503 +0000 UTC m=+0.103269090 container start 53de1ffe959a6ba0031b6f2a752b30c44883690df286ecc88268a2674ae8246d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:46:12 compute-0 bash[114191]: 53de1ffe959a6ba0031b6f2a752b30c44883690df286ecc88268a2674ae8246d
Jan 26 09:46:12 compute-0 podman[114191]: 2026-01-26 09:46:12.001574924 +0000 UTC m=+0.022965561 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:46:12 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:46:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:12 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 09:46:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:12 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 09:46:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:12 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 09:46:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:12 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 09:46:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:12 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 09:46:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:12 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 09:46:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:12 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 09:46:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:12 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:46:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:12.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:12.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:46:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:46:14 compute-0 sudo[114375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quwclicrytktwvuanbsvhfkfsxlqgsyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420773.7354584-1219-149302604337490/AnsiballZ_systemd.py'
Jan 26 09:46:14 compute-0 sudo[114375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:14 compute-0 ceph-mon[74456]: pgmap v112: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:46:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:14.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:14 compute-0 python3.9[114377]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:46:14 compute-0 sudo[114375]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:14.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:46:14 compute-0 sudo[114531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaaaqalwsijibakguujptmntrcirhzmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420774.521873-1219-102039858168663/AnsiballZ_systemd.py'
Jan 26 09:46:14 compute-0 sudo[114531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:15 compute-0 python3.9[114533]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:46:15 compute-0 sudo[114534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:46:15 compute-0 sudo[114534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:46:15 compute-0 sudo[114534]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:15 compute-0 ceph-mon[74456]: pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:46:15 compute-0 sudo[114531]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:15 compute-0 sshd-session[105874]: Connection closed by 192.168.122.30 port 47224
Jan 26 09:46:15 compute-0 sshd-session[105836]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:46:15 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Jan 26 09:46:15 compute-0 systemd[1]: session-39.scope: Consumed 1min 5.780s CPU time.
Jan 26 09:46:15 compute-0 systemd-logind[787]: Session 39 logged out. Waiting for processes to exit.
Jan 26 09:46:15 compute-0 systemd-logind[787]: Removed session 39.
Jan 26 09:46:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:16.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:16.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:46:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:46:16] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Jan 26 09:46:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:46:16] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Jan 26 09:46:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:46:16.951Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:46:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:46:16.951Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:46:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:46:17 compute-0 sshd-session[114587]: Invalid user test from 157.245.76.178 port 42770
Jan 26 09:46:17 compute-0 ceph-mon[74456]: pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:46:17 compute-0 sshd-session[114587]: Connection closed by invalid user test 157.245.76.178 port 42770 [preauth]
Jan 26 09:46:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:18 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:46:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:18 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:46:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:18.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:46:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:18.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:46:18
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', '.nfs', 'default.rgw.log', 'backups', 'vms', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 09:46:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:46:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:46:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:46:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:46:19 compute-0 ceph-mon[74456]: pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:46:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:46:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:20.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:46:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:20.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:46:21 compute-0 sshd-session[114593]: Accepted publickey for zuul from 192.168.122.30 port 54882 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:46:21 compute-0 systemd-logind[787]: New session 40 of user zuul.
Jan 26 09:46:21 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 26 09:46:21 compute-0 sshd-session[114593]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:46:21 compute-0 ceph-mon[74456]: pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:46:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:22.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:46:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:22.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:46:22 compute-0 python3.9[114746]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:46:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:46:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:46:23 compute-0 sudo[114902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtpptijdbgeendqotgaxgrnmdsvcnsjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420783.2214723-63-262703447767441/AnsiballZ_getent.py'
Jan 26 09:46:23 compute-0 sudo[114902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:23 compute-0 ceph-mon[74456]: pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:46:23 compute-0 python3.9[114904]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 26 09:46:23 compute-0 sudo[114902]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:46:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:24 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d7c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:24.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:24.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:46:24 compute-0 sudo[115073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fenbhjgikzfjpdgptakvxfffobgcwbsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420784.4504125-99-280545261321037/AnsiballZ_setup.py'
Jan 26 09:46:24 compute-0 sudo[115073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:25 compute-0 python3.9[115075]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:46:25 compute-0 sudo[115073]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:25 compute-0 sudo[115157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpbawoesongvrkfoapngftsxeeauxjbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420784.4504125-99-280545261321037/AnsiballZ_dnf.py'
Jan 26 09:46:25 compute-0 sudo[115157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:25 compute-0 python3.9[115159]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 09:46:25 compute-0 ceph-mon[74456]: pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:46:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:26 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:26 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d60000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:26 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d58000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:26.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:26.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:46:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:46:26] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Jan 26 09:46:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:46:26] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Jan 26 09:46:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:46:26.952Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:46:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:46:26.952Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:46:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:46:26.952Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:46:27 compute-0 sudo[115157]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:46:27 compute-0 sudo[115312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smqmuxrnwpzopljyrlmdxlukoooilnhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420787.5189993-141-180012433550938/AnsiballZ_dnf.py'
Jan 26 09:46:27 compute-0 sudo[115312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:27 compute-0 ceph-mon[74456]: pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:46:28 compute-0 python3.9[115314]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:46:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094628 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:46:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:28 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d6c000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:28 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:28 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d600016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:28.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:28.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:46:29 compute-0 sudo[115312]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:29 compute-0 ceph-mon[74456]: pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:46:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:30 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:30 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d6c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:30 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:30.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:30 compute-0 sudo[115469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwlulykwobbbmqnsdpfbhknyktiiqswn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420789.7578611-165-48487653415723/AnsiballZ_systemd.py'
Jan 26 09:46:30 compute-0 sudo[115469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:30.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:46:30 compute-0 python3.9[115471]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 09:46:30 compute-0 sudo[115469]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:31 compute-0 python3.9[115624]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:46:31 compute-0 ceph-mon[74456]: pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:46:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:32 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d600016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:32 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:32 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d6c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:32.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:46:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:46:32 compute-0 sudo[115776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdcqjxolohgnihabbkdieauxyomsdkvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420792.0962584-219-18809046882017/AnsiballZ_sefcontext.py'
Jan 26 09:46:32 compute-0 sudo[115776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:33 compute-0 python3.9[115778]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 26 09:46:33 compute-0 sudo[115776]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:46:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:46:33 compute-0 ceph-mon[74456]: pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:46:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:46:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:34 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:34 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d600016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:34 compute-0 python3.9[115928]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:46:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:34 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:34.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:34.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:46:35 compute-0 sudo[116086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvfnvnrzepmyzzjxoeocpsvmcoztzrvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420794.798915-273-108365242214671/AnsiballZ_dnf.py'
Jan 26 09:46:35 compute-0 sudo[116086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:35 compute-0 sudo[116089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:46:35 compute-0 sudo[116089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:46:35 compute-0 sudo[116089]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:35 compute-0 python3.9[116088]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:46:35 compute-0 ceph-mon[74456]: pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:46:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:36 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d6c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:36 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:36 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:36.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.002000053s ======
Jan 26 09:46:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 26 09:46:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:46:36 compute-0 sudo[116086]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:46:36] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 26 09:46:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:46:36] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 26 09:46:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:46:36.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:46:37 compute-0 sudo[116266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yitjldhngggkglatzippfbsjorkwfplf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420797.051799-297-236472499688478/AnsiballZ_command.py'
Jan 26 09:46:37 compute-0 sudo[116266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:46:37 compute-0 python3.9[116268]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:46:38 compute-0 ceph-mon[74456]: pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:46:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:38 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d5c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:38 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d58002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:38 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d60002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:38.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:38 compute-0 sudo[116266]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:38.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:39 compute-0 sudo[116555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fndtxwsdhivqqczjqyrquurgzhmpbnnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420798.6711621-321-125096563776881/AnsiballZ_file.py'
Jan 26 09:46:39 compute-0 sudo[116555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:39 compute-0 python3.9[116557]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 26 09:46:39 compute-0 sudo[116555]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:40 compute-0 ceph-mon[74456]: pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:40 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:40 compute-0 python3.9[116707]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:46:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:40 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d5c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:40 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d58002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:40.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:40.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:40 compute-0 sudo[116861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mytweshisbqfbslaefmwcyuxiedtvpxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420800.4488115-369-246959289133956/AnsiballZ_dnf.py'
Jan 26 09:46:40 compute-0 sudo[116861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:40 compute-0 python3.9[116863]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:46:42 compute-0 ceph-mon[74456]: pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:42 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d60002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:42 compute-0 sudo[116861]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:42 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:42 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d5c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:42.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:46:43 compute-0 ceph-mon[74456]: pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:44 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d58002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:44 compute-0 sudo[117016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qceqzfntunwacbcwajmkwzuuiywkkvgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420803.8836784-396-268671075129406/AnsiballZ_dnf.py'
Jan 26 09:46:44 compute-0 sudo[117016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:44 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d60003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:44 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:46:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:44.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:46:44 compute-0 python3.9[117018]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:46:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.002000053s ======
Jan 26 09:46:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 26 09:46:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:45 compute-0 ceph-mon[74456]: pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:45 compute-0 sudo[117016]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:46 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d5c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:46 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d58003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:46 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d60003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:46.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:46 compute-0 sudo[117173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtgeudaugrwkdokofpbtbzadvenqbipg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420806.2671764-432-272436330541657/AnsiballZ_stat.py'
Jan 26 09:46:46 compute-0 sudo[117173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:46:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:46:46] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 26 09:46:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:46:46] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 26 09:46:46 compute-0 python3.9[117175]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:46:46 compute-0 sudo[117173]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:46:46.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:46:47 compute-0 sudo[117327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeosdcarunpurwpdqlyrkzybiwxwqdjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420807.0383155-456-102173483324907/AnsiballZ_slurp.py'
Jan 26 09:46:47 compute-0 sudo[117327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:46:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:46:47 compute-0 python3.9[117329]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 26 09:46:47 compute-0 sudo[117327]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:47 compute-0 ceph-mon[74456]: pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:46:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:48 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:48 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d5c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:48 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d58003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:48.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:48.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:46:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:46:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:46:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:46:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:46:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:46:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:46:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:46:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:46:48 compute-0 sshd-session[114596]: Connection closed by 192.168.122.30 port 54882
Jan 26 09:46:48 compute-0 sshd-session[114593]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:46:48 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 26 09:46:48 compute-0 systemd[1]: session-40.scope: Consumed 18.246s CPU time.
Jan 26 09:46:48 compute-0 systemd-logind[787]: Session 40 logged out. Waiting for processes to exit.
Jan 26 09:46:48 compute-0 systemd-logind[787]: Removed session 40.
Jan 26 09:46:49 compute-0 ceph-mon[74456]: pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:50 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d60003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:50 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:50 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d5c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:50.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:50.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:51 compute-0 ceph-mon[74456]: pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:52 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d58003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:52 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d60003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:52 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:52.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:46:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:52.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:46:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:46:53 compute-0 sshd-session[117360]: Accepted publickey for zuul from 192.168.122.30 port 36328 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:46:53 compute-0 systemd-logind[787]: New session 41 of user zuul.
Jan 26 09:46:53 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 26 09:46:53 compute-0 sshd-session[117360]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:46:53 compute-0 ceph-mon[74456]: pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:54 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d5c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:54 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d58003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:54 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d58003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:46:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:54.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:46:54 compute-0 python3.9[117513]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:46:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:54.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:55 compute-0 sudo[117670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:46:55 compute-0 sudo[117670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:46:55 compute-0 sudo[117670]: pam_unix(sudo:session): session closed for user root
Jan 26 09:46:55 compute-0 python3.9[117669]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:46:55 compute-0 ceph-mon[74456]: pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:56 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d5c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:56 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:56 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:56.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:46:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:56.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:46:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:46:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:46:56] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Jan 26 09:46:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:46:56] "GET /metrics HTTP/1.1" 200 48264 "" "Prometheus/2.51.0"
Jan 26 09:46:56 compute-0 python3.9[117890]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:46:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:46:56.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:46:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:46:56.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:46:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:46:56.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:46:57 compute-0 sshd-session[117363]: Connection closed by 192.168.122.30 port 36328
Jan 26 09:46:57 compute-0 sshd-session[117360]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:46:57 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 26 09:46:57 compute-0 systemd[1]: session-41.scope: Consumed 2.521s CPU time.
Jan 26 09:46:57 compute-0 systemd-logind[787]: Session 41 logged out. Waiting for processes to exit.
Jan 26 09:46:57 compute-0 systemd-logind[787]: Removed session 41.
Jan 26 09:46:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:46:57 compute-0 ceph-mon[74456]: pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:46:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:58 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d58003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:58 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d5c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:46:58 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d5c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:46:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:46:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:46:58.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:46:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:46:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:46:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:46:58.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:46:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:46:58 compute-0 sshd-session[117919]: Invalid user test from 157.245.76.178 port 41964
Jan 26 09:46:59 compute-0 sshd-session[117919]: Connection closed by invalid user test 157.245.76.178 port 41964 [preauth]
Jan 26 09:46:59 compute-0 ceph-mon[74456]: pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:47:00 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d48000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:47:00 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:47:00 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d500016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:47:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:00.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:47:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:00.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:00 compute-0 sudo[117923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:47:00 compute-0 sudo[117923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:00 compute-0 sudo[117923]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:00 compute-0 sudo[117948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 26 09:47:00 compute-0 sudo[117948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:01 compute-0 podman[118047]: 2026-01-26 09:47:01.410561775 +0000 UTC m=+0.074848729 container exec 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 26 09:47:01 compute-0 podman[118068]: 2026-01-26 09:47:01.60733427 +0000 UTC m=+0.062641876 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 09:47:01 compute-0 podman[118047]: 2026-01-26 09:47:01.612521972 +0000 UTC m=+0.276808826 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 09:47:01 compute-0 ceph-mon[74456]: pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:47:02 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d5c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:02 compute-0 podman[118167]: 2026-01-26 09:47:02.168495566 +0000 UTC m=+0.072042673 container exec 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:47:02 compute-0 podman[118167]: 2026-01-26 09:47:02.206852635 +0000 UTC m=+0.110399702 container exec_died 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:47:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:47:02 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d480016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:47:02 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:02.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:02.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:47:02 compute-0 podman[118260]: 2026-01-26 09:47:02.764649068 +0000 UTC m=+0.164051760 container exec 53de1ffe959a6ba0031b6f2a752b30c44883690df286ecc88268a2674ae8246d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:47:02 compute-0 podman[118260]: 2026-01-26 09:47:02.778678042 +0000 UTC m=+0.178080754 container exec_died 53de1ffe959a6ba0031b6f2a752b30c44883690df286ecc88268a2674ae8246d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:47:02 compute-0 sshd-session[118274]: Accepted publickey for zuul from 192.168.122.30 port 38310 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:47:02 compute-0 systemd-logind[787]: New session 42 of user zuul.
Jan 26 09:47:02 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 26 09:47:02 compute-0 sshd-session[118274]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:47:03 compute-0 podman[118375]: 2026-01-26 09:47:03.328602931 +0000 UTC m=+0.265318381 container exec 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 09:47:03 compute-0 podman[118375]: 2026-01-26 09:47:03.365624264 +0000 UTC m=+0.302339654 container exec_died 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 09:47:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:47:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:47:03 compute-0 podman[118495]: 2026-01-26 09:47:03.687288876 +0000 UTC m=+0.121344751 container exec 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, architecture=x86_64, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived)
Jan 26 09:47:03 compute-0 podman[118561]: 2026-01-26 09:47:03.778384178 +0000 UTC m=+0.071993150 container exec_died 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git)
Jan 26 09:47:03 compute-0 podman[118495]: 2026-01-26 09:47:03.783850808 +0000 UTC m=+0.217906583 container exec_died 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., vcs-type=git, release=1793, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, description=keepalived for Ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.component=keepalived-container)
Jan 26 09:47:03 compute-0 ceph-mon[74456]: pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:47:03 compute-0 python3.9[118554]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:47:04 compute-0 podman[118610]: 2026-01-26 09:47:04.026562 +0000 UTC m=+0.060947239 container exec c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:47:04 compute-0 podman[118610]: 2026-01-26 09:47:04.090684614 +0000 UTC m=+0.125069823 container exec_died c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:47:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:47:04 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:47:04 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:04 compute-0 podman[118704]: 2026-01-26 09:47:04.288405795 +0000 UTC m=+0.051310105 container exec ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:47:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:47:04 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:04.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:04 compute-0 podman[118704]: 2026-01-26 09:47:04.481073527 +0000 UTC m=+0.243977827 container exec_died ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:47:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:04.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:04 compute-0 podman[118948]: 2026-01-26 09:47:04.862876395 +0000 UTC m=+0.056387454 container exec 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:47:04 compute-0 podman[118948]: 2026-01-26 09:47:04.902262202 +0000 UTC m=+0.095773261 container exec_died 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:47:04 compute-0 python3.9[118915]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:47:04 compute-0 sudo[117948]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:47:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:47:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:47:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:47:05 compute-0 sudo[118995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:47:05 compute-0 sudo[118995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:05 compute-0 sudo[118995]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:05 compute-0 sudo[119020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:47:05 compute-0 sudo[119020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:05 compute-0 sudo[119020]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:47:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:47:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:47:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:47:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:47:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:47:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:47:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:47:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:47:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:47:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:47:05 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:47:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:47:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:47:05 compute-0 sudo[119225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjeygkmbnwjxtblmwfigngsdmehtlumz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420825.3167326-75-150510814516672/AnsiballZ_setup.py'
Jan 26 09:47:05 compute-0 sudo[119225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:05 compute-0 sudo[119226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:47:05 compute-0 sudo[119226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:05 compute-0 sudo[119226]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:05 compute-0 sudo[119253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:47:05 compute-0 sudo[119253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:05 compute-0 python3.9[119235]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:47:05 compute-0 ceph-mon[74456]: pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:47:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:47:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:47:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:47:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:47:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:47:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:47:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:47:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:47:06 compute-0 podman[119328]: 2026-01-26 09:47:06.123104909 +0000 UTC m=+0.039762258 container create 1375735e674961fd0ac24314965ec99c352c90e1aec33edb5eefe934d47c7cc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 26 09:47:06 compute-0 sudo[119225]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:06 compute-0 systemd[1]: Started libpod-conmon-1375735e674961fd0ac24314965ec99c352c90e1aec33edb5eefe934d47c7cc3.scope.
Jan 26 09:47:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:47:06 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d74003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:47:06 compute-0 podman[119328]: 2026-01-26 09:47:06.195094959 +0000 UTC m=+0.111752388 container init 1375735e674961fd0ac24314965ec99c352c90e1aec33edb5eefe934d47c7cc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 09:47:06 compute-0 podman[119328]: 2026-01-26 09:47:06.104933412 +0000 UTC m=+0.021590781 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:47:06 compute-0 podman[119328]: 2026-01-26 09:47:06.203450288 +0000 UTC m=+0.120107657 container start 1375735e674961fd0ac24314965ec99c352c90e1aec33edb5eefe934d47c7cc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:47:06 compute-0 podman[119328]: 2026-01-26 09:47:06.206730148 +0000 UTC m=+0.123387517 container attach 1375735e674961fd0ac24314965ec99c352c90e1aec33edb5eefe934d47c7cc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:47:06 compute-0 wonderful_swartz[119344]: 167 167
Jan 26 09:47:06 compute-0 systemd[1]: libpod-1375735e674961fd0ac24314965ec99c352c90e1aec33edb5eefe934d47c7cc3.scope: Deactivated successfully.
Jan 26 09:47:06 compute-0 podman[119328]: 2026-01-26 09:47:06.209601927 +0000 UTC m=+0.126259546 container died 1375735e674961fd0ac24314965ec99c352c90e1aec33edb5eefe934d47c7cc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 09:47:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f78c70bbeff62bbaced53a5b50ba917b55c98817597065ab15d3e4b7b21e1ef2-merged.mount: Deactivated successfully.
Jan 26 09:47:06 compute-0 podman[119328]: 2026-01-26 09:47:06.252586693 +0000 UTC m=+0.169244052 container remove 1375735e674961fd0ac24314965ec99c352c90e1aec33edb5eefe934d47c7cc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:47:06 compute-0 systemd[1]: libpod-conmon-1375735e674961fd0ac24314965ec99c352c90e1aec33edb5eefe934d47c7cc3.scope: Deactivated successfully.
Jan 26 09:47:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:47:06 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:47:06 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d480016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:06.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:06 compute-0 podman[119373]: 2026-01-26 09:47:06.494378879 +0000 UTC m=+0.066606943 container create 4038aa173b9aa2f96645b5ee91570958238a7d81935b58390517b8c95427f802 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_moore, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 26 09:47:06 compute-0 systemd[1]: Started libpod-conmon-4038aa173b9aa2f96645b5ee91570958238a7d81935b58390517b8c95427f802.scope.
Jan 26 09:47:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:06.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:06 compute-0 podman[119373]: 2026-01-26 09:47:06.464265185 +0000 UTC m=+0.036493279 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:47:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dab056760adeb2be800f406733efda7a697dcc175ffdc7e4cccdc700609dc53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dab056760adeb2be800f406733efda7a697dcc175ffdc7e4cccdc700609dc53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dab056760adeb2be800f406733efda7a697dcc175ffdc7e4cccdc700609dc53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dab056760adeb2be800f406733efda7a697dcc175ffdc7e4cccdc700609dc53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dab056760adeb2be800f406733efda7a697dcc175ffdc7e4cccdc700609dc53/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:06 compute-0 podman[119373]: 2026-01-26 09:47:06.591055204 +0000 UTC m=+0.163283308 container init 4038aa173b9aa2f96645b5ee91570958238a7d81935b58390517b8c95427f802 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_moore, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:47:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:47:06 compute-0 podman[119373]: 2026-01-26 09:47:06.60147427 +0000 UTC m=+0.173702284 container start 4038aa173b9aa2f96645b5ee91570958238a7d81935b58390517b8c95427f802 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_moore, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 26 09:47:06 compute-0 podman[119373]: 2026-01-26 09:47:06.604961795 +0000 UTC m=+0.177189809 container attach 4038aa173b9aa2f96645b5ee91570958238a7d81935b58390517b8c95427f802 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 26 09:47:06 compute-0 sudo[119463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fujdtplfohpsxrzfoxsommqitlchtliu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420825.3167326-75-150510814516672/AnsiballZ_dnf.py'
Jan 26 09:47:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:47:06] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:47:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:47:06] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:47:06 compute-0 sudo[119463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:06 compute-0 python3.9[119465]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:47:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:47:06.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:47:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:47:06.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:47:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:47:06.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:47:06 compute-0 youthful_moore[119432]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:47:06 compute-0 youthful_moore[119432]: --> All data devices are unavailable
Jan 26 09:47:06 compute-0 systemd[1]: libpod-4038aa173b9aa2f96645b5ee91570958238a7d81935b58390517b8c95427f802.scope: Deactivated successfully.
Jan 26 09:47:06 compute-0 podman[119373]: 2026-01-26 09:47:06.998904015 +0000 UTC m=+0.571132039 container died 4038aa173b9aa2f96645b5ee91570958238a7d81935b58390517b8c95427f802 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_moore, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:47:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dab056760adeb2be800f406733efda7a697dcc175ffdc7e4cccdc700609dc53-merged.mount: Deactivated successfully.
Jan 26 09:47:07 compute-0 podman[119373]: 2026-01-26 09:47:07.042376705 +0000 UTC m=+0.614604719 container remove 4038aa173b9aa2f96645b5ee91570958238a7d81935b58390517b8c95427f802 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_moore, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:47:07 compute-0 systemd[1]: libpod-conmon-4038aa173b9aa2f96645b5ee91570958238a7d81935b58390517b8c95427f802.scope: Deactivated successfully.
Jan 26 09:47:07 compute-0 sudo[119253]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:07 compute-0 sudo[119490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:47:07 compute-0 sudo[119490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:07 compute-0 sudo[119490]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:07 compute-0 sudo[119515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:47:07 compute-0 sudo[119515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:07 compute-0 podman[119581]: 2026-01-26 09:47:07.603934521 +0000 UTC m=+0.044378875 container create 81ed34128e283e7b0a1dfbcf388e3ce203a4bc5ddc74a76ec762012435de2dd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_greider, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 09:47:07 compute-0 systemd[1]: Started libpod-conmon-81ed34128e283e7b0a1dfbcf388e3ce203a4bc5ddc74a76ec762012435de2dd0.scope.
Jan 26 09:47:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:47:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:47:07 compute-0 podman[119581]: 2026-01-26 09:47:07.677141524 +0000 UTC m=+0.117585918 container init 81ed34128e283e7b0a1dfbcf388e3ce203a4bc5ddc74a76ec762012435de2dd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_greider, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 26 09:47:07 compute-0 podman[119581]: 2026-01-26 09:47:07.582121494 +0000 UTC m=+0.022565928 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:47:07 compute-0 podman[119581]: 2026-01-26 09:47:07.68247019 +0000 UTC m=+0.122914574 container start 81ed34128e283e7b0a1dfbcf388e3ce203a4bc5ddc74a76ec762012435de2dd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_greider, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 26 09:47:07 compute-0 podman[119581]: 2026-01-26 09:47:07.686319746 +0000 UTC m=+0.126764150 container attach 81ed34128e283e7b0a1dfbcf388e3ce203a4bc5ddc74a76ec762012435de2dd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_greider, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 26 09:47:07 compute-0 stoic_greider[119597]: 167 167
Jan 26 09:47:07 compute-0 systemd[1]: libpod-81ed34128e283e7b0a1dfbcf388e3ce203a4bc5ddc74a76ec762012435de2dd0.scope: Deactivated successfully.
Jan 26 09:47:07 compute-0 podman[119581]: 2026-01-26 09:47:07.688603018 +0000 UTC m=+0.129047432 container died 81ed34128e283e7b0a1dfbcf388e3ce203a4bc5ddc74a76ec762012435de2dd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:47:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-cacae53e8ad4523b86d14eeb5920923f264b3ef6e7244f605db1c3b8cb8e0c7d-merged.mount: Deactivated successfully.
Jan 26 09:47:07 compute-0 podman[119581]: 2026-01-26 09:47:07.758300495 +0000 UTC m=+0.198744859 container remove 81ed34128e283e7b0a1dfbcf388e3ce203a4bc5ddc74a76ec762012435de2dd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_greider, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:47:07 compute-0 systemd[1]: libpod-conmon-81ed34128e283e7b0a1dfbcf388e3ce203a4bc5ddc74a76ec762012435de2dd0.scope: Deactivated successfully.
Jan 26 09:47:07 compute-0 podman[119621]: 2026-01-26 09:47:07.939519274 +0000 UTC m=+0.044503578 container create 30373ffffc08170fcd9bff77e8b33f9e44d70fd58fbf1a498c54fccfe8ec7422 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_merkle, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:47:07 compute-0 systemd[1]: Started libpod-conmon-30373ffffc08170fcd9bff77e8b33f9e44d70fd58fbf1a498c54fccfe8ec7422.scope.
Jan 26 09:47:07 compute-0 ceph-mon[74456]: pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:47:08 compute-0 podman[119621]: 2026-01-26 09:47:07.919942388 +0000 UTC m=+0.024926722 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:47:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d04ed0227c940fc4127cf231b17a6f136968cfea2052c629d9053461bb890b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d04ed0227c940fc4127cf231b17a6f136968cfea2052c629d9053461bb890b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d04ed0227c940fc4127cf231b17a6f136968cfea2052c629d9053461bb890b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d04ed0227c940fc4127cf231b17a6f136968cfea2052c629d9053461bb890b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:08 compute-0 podman[119621]: 2026-01-26 09:47:08.041982208 +0000 UTC m=+0.146966532 container init 30373ffffc08170fcd9bff77e8b33f9e44d70fd58fbf1a498c54fccfe8ec7422 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_merkle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:47:08 compute-0 podman[119621]: 2026-01-26 09:47:08.054762828 +0000 UTC m=+0.159747162 container start 30373ffffc08170fcd9bff77e8b33f9e44d70fd58fbf1a498c54fccfe8ec7422 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:47:08 compute-0 podman[119621]: 2026-01-26 09:47:08.059493457 +0000 UTC m=+0.164477761 container attach 30373ffffc08170fcd9bff77e8b33f9e44d70fd58fbf1a498c54fccfe8ec7422 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:47:08 compute-0 sudo[119463]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[114206]: 26/01/2026 09:47:08 : epoch 697737e4 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d5c003f50 fd 39 proxy ignored for local
Jan 26 09:47:08 compute-0 kernel: ganesha.nfsd[114946]: segfault at 50 ip 00007f9e075b632e sp 00007f9d677fd210 error 4 in libntirpc.so.5.8[7f9e0759b000+2c000] likely on CPU 0 (core 0, socket 0)
Jan 26 09:47:08 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 09:47:08 compute-0 systemd[1]: Started Process Core Dump (PID 119642/UID 0).
Jan 26 09:47:08 compute-0 interesting_merkle[119637]: {
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:     "0": [
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:         {
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:             "devices": [
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "/dev/loop3"
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:             ],
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:             "lv_name": "ceph_lv0",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:             "lv_size": "21470642176",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:             "name": "ceph_lv0",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:             "tags": {
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "ceph.cluster_name": "ceph",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "ceph.crush_device_class": "",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "ceph.encrypted": "0",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "ceph.osd_id": "0",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "ceph.type": "block",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "ceph.vdo": "0",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:                 "ceph.with_tpm": "0"
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:             },
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:             "type": "block",
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:             "vg_name": "ceph_vg0"
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:         }
Jan 26 09:47:08 compute-0 interesting_merkle[119637]:     ]
Jan 26 09:47:08 compute-0 interesting_merkle[119637]: }
Jan 26 09:47:08 compute-0 systemd[1]: libpod-30373ffffc08170fcd9bff77e8b33f9e44d70fd58fbf1a498c54fccfe8ec7422.scope: Deactivated successfully.
Jan 26 09:47:08 compute-0 podman[119621]: 2026-01-26 09:47:08.38921905 +0000 UTC m=+0.494203364 container died 30373ffffc08170fcd9bff77e8b33f9e44d70fd58fbf1a498c54fccfe8ec7422 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_merkle, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 09:47:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:08.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d04ed0227c940fc4127cf231b17a6f136968cfea2052c629d9053461bb890b8-merged.mount: Deactivated successfully.
Jan 26 09:47:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:08.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:08 compute-0 podman[119621]: 2026-01-26 09:47:08.570802978 +0000 UTC m=+0.675787282 container remove 30373ffffc08170fcd9bff77e8b33f9e44d70fd58fbf1a498c54fccfe8ec7422 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:47:08 compute-0 systemd[1]: libpod-conmon-30373ffffc08170fcd9bff77e8b33f9e44d70fd58fbf1a498c54fccfe8ec7422.scope: Deactivated successfully.
Jan 26 09:47:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:08 compute-0 sudo[119515]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:08 compute-0 sudo[119785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:47:08 compute-0 sudo[119785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:08 compute-0 sudo[119785]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:08 compute-0 sudo[119835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plxkgodojsrjeydkqqmcvwuefqwgxvgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420828.3972976-111-66459706355469/AnsiballZ_setup.py'
Jan 26 09:47:08 compute-0 sudo[119835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:08 compute-0 sudo[119838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:47:08 compute-0 sudo[119838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:08 compute-0 python3.9[119839]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:47:09 compute-0 podman[119918]: 2026-01-26 09:47:09.133204268 +0000 UTC m=+0.044467338 container create da3d8e25e8fd6b3d47ab8b23da315eea3aba67ee4697eb61c71ce0e2218c503f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:47:09 compute-0 systemd[1]: Started libpod-conmon-da3d8e25e8fd6b3d47ab8b23da315eea3aba67ee4697eb61c71ce0e2218c503f.scope.
Jan 26 09:47:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:47:09 compute-0 podman[119918]: 2026-01-26 09:47:09.112117541 +0000 UTC m=+0.023380611 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:47:09 compute-0 podman[119918]: 2026-01-26 09:47:09.215994104 +0000 UTC m=+0.127257224 container init da3d8e25e8fd6b3d47ab8b23da315eea3aba67ee4697eb61c71ce0e2218c503f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Jan 26 09:47:09 compute-0 podman[119918]: 2026-01-26 09:47:09.224797154 +0000 UTC m=+0.136060254 container start da3d8e25e8fd6b3d47ab8b23da315eea3aba67ee4697eb61c71ce0e2218c503f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_brattain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Jan 26 09:47:09 compute-0 lucid_brattain[119950]: 167 167
Jan 26 09:47:09 compute-0 systemd[1]: libpod-da3d8e25e8fd6b3d47ab8b23da315eea3aba67ee4697eb61c71ce0e2218c503f.scope: Deactivated successfully.
Jan 26 09:47:09 compute-0 podman[119918]: 2026-01-26 09:47:09.233243485 +0000 UTC m=+0.144506555 container attach da3d8e25e8fd6b3d47ab8b23da315eea3aba67ee4697eb61c71ce0e2218c503f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:47:09 compute-0 podman[119918]: 2026-01-26 09:47:09.233989516 +0000 UTC m=+0.145252596 container died da3d8e25e8fd6b3d47ab8b23da315eea3aba67ee4697eb61c71ce0e2218c503f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_brattain, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 09:47:09 compute-0 systemd-coredump[119651]: Process 114210 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f9e075b632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 09:47:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-61368c283832c20879ad3fb83c0e99a3cd898e8a5a6996b233534e0cfb00a4a3-merged.mount: Deactivated successfully.
Jan 26 09:47:09 compute-0 sudo[119835]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:09 compute-0 podman[119918]: 2026-01-26 09:47:09.282695559 +0000 UTC m=+0.193958629 container remove da3d8e25e8fd6b3d47ab8b23da315eea3aba67ee4697eb61c71ce0e2218c503f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:47:09 compute-0 systemd[1]: libpod-conmon-da3d8e25e8fd6b3d47ab8b23da315eea3aba67ee4697eb61c71ce0e2218c503f.scope: Deactivated successfully.
Jan 26 09:47:09 compute-0 systemd[1]: systemd-coredump@1-119642-0.service: Deactivated successfully.
Jan 26 09:47:09 compute-0 systemd[1]: systemd-coredump@1-119642-0.service: Consumed 1.070s CPU time.
Jan 26 09:47:09 compute-0 podman[120011]: 2026-01-26 09:47:09.389151712 +0000 UTC m=+0.020979765 container died 53de1ffe959a6ba0031b6f2a752b30c44883690df286ecc88268a2674ae8246d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:47:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-93bccb38f04fabb246efda2931563429c9811b6bbbf32cf72496e4366401b408-merged.mount: Deactivated successfully.
Jan 26 09:47:09 compute-0 podman[120011]: 2026-01-26 09:47:09.420753906 +0000 UTC m=+0.052581959 container remove 53de1ffe959a6ba0031b6f2a752b30c44883690df286ecc88268a2674ae8246d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:47:09 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 09:47:09 compute-0 podman[120031]: 2026-01-26 09:47:09.499566354 +0000 UTC m=+0.059151711 container create 833fcc893d115f80d229c5f2daa5b0727c1cb16033dfe8b8e2ced5dce3aae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_meninsky, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:47:09 compute-0 systemd[1]: Started libpod-conmon-833fcc893d115f80d229c5f2daa5b0727c1cb16033dfe8b8e2ced5dce3aae2af.scope.
Jan 26 09:47:09 compute-0 podman[120031]: 2026-01-26 09:47:09.474384695 +0000 UTC m=+0.033970102 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:47:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb011d9400d4e618396dd1d2676d74287e7aa7def560385ac825148c02b6d1c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb011d9400d4e618396dd1d2676d74287e7aa7def560385ac825148c02b6d1c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb011d9400d4e618396dd1d2676d74287e7aa7def560385ac825148c02b6d1c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb011d9400d4e618396dd1d2676d74287e7aa7def560385ac825148c02b6d1c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:09 compute-0 podman[120031]: 2026-01-26 09:47:09.605241495 +0000 UTC m=+0.164826832 container init 833fcc893d115f80d229c5f2daa5b0727c1cb16033dfe8b8e2ced5dce3aae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 09:47:09 compute-0 podman[120031]: 2026-01-26 09:47:09.615432824 +0000 UTC m=+0.175018151 container start 833fcc893d115f80d229c5f2daa5b0727c1cb16033dfe8b8e2ced5dce3aae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_meninsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:47:09 compute-0 podman[120031]: 2026-01-26 09:47:09.618270901 +0000 UTC m=+0.177856228 container attach 833fcc893d115f80d229c5f2daa5b0727c1cb16033dfe8b8e2ced5dce3aae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:47:09 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 09:47:09 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.329s CPU time.
Jan 26 09:47:10 compute-0 ceph-mon[74456]: pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:10 compute-0 sudo[120268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euyaithzcyqbkhefberdgpanmtjftywj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420829.7332306-144-174725798535304/AnsiballZ_file.py'
Jan 26 09:47:10 compute-0 sudo[120268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:10 compute-0 lvm[120277]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:47:10 compute-0 lvm[120277]: VG ceph_vg0 finished
Jan 26 09:47:10 compute-0 admiring_meninsky[120061]: {}
Jan 26 09:47:10 compute-0 systemd[1]: libpod-833fcc893d115f80d229c5f2daa5b0727c1cb16033dfe8b8e2ced5dce3aae2af.scope: Deactivated successfully.
Jan 26 09:47:10 compute-0 systemd[1]: libpod-833fcc893d115f80d229c5f2daa5b0727c1cb16033dfe8b8e2ced5dce3aae2af.scope: Consumed 1.115s CPU time.
Jan 26 09:47:10 compute-0 podman[120031]: 2026-01-26 09:47:10.362005574 +0000 UTC m=+0.921590941 container died 833fcc893d115f80d229c5f2daa5b0727c1cb16033dfe8b8e2ced5dce3aae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_meninsky, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:47:10 compute-0 python3.9[120272]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:47:10 compute-0 sudo[120268]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb011d9400d4e618396dd1d2676d74287e7aa7def560385ac825148c02b6d1c8-merged.mount: Deactivated successfully.
Jan 26 09:47:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:10.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:10 compute-0 podman[120031]: 2026-01-26 09:47:10.418069938 +0000 UTC m=+0.977655305 container remove 833fcc893d115f80d229c5f2daa5b0727c1cb16033dfe8b8e2ced5dce3aae2af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_meninsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:47:10 compute-0 systemd[1]: libpod-conmon-833fcc893d115f80d229c5f2daa5b0727c1cb16033dfe8b8e2ced5dce3aae2af.scope: Deactivated successfully.
Jan 26 09:47:10 compute-0 sudo[119838]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:47:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:47:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:47:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:47:10 compute-0 sudo[120319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:47:10 compute-0 sudo[120319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:10 compute-0 sudo[120319]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:10.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:11 compute-0 sudo[120469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elbgkwnioygzqvefvtoolovqemdhnlfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420830.575118-168-201918164405171/AnsiballZ_command.py'
Jan 26 09:47:11 compute-0 sudo[120469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:11 compute-0 python3.9[120471]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:47:11 compute-0 sudo[120469]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:47:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:47:11 compute-0 ceph-mon[74456]: pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:12 compute-0 sudo[120634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwjgnhewlvkrwlsujssdvihmvfbwfqee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420831.55655-192-189592208856864/AnsiballZ_stat.py'
Jan 26 09:47:12 compute-0 sudo[120634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:12 compute-0 python3.9[120636]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:47:12 compute-0 sudo[120634]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:12.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:12 compute-0 sudo[120714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxgatwrexcmxyvjssacjjexjazllmkvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420831.55655-192-189592208856864/AnsiballZ_file.py'
Jan 26 09:47:12 compute-0 sudo[120714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:12.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:47:12 compute-0 python3.9[120716]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:47:12 compute-0 sudo[120714]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:13 compute-0 sudo[120866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpgqcjaaupeaahorxsmolnjsbhhxuwkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420832.9077427-228-165714316314995/AnsiballZ_stat.py'
Jan 26 09:47:13 compute-0 sudo[120866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:13 compute-0 python3.9[120868]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:47:13 compute-0 sudo[120866]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:13 compute-0 ceph-mon[74456]: pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:13 compute-0 sudo[120944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmchoengqkmhmjynvzohysjmnhuzifmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420832.9077427-228-165714316314995/AnsiballZ_file.py'
Jan 26 09:47:13 compute-0 sudo[120944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:13 compute-0 python3.9[120946]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:47:13 compute-0 sudo[120944]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094714 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:47:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:14.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:14.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:14 compute-0 sudo[121098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spoxlpddmoxirxmcloakluivnokfnwwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420834.1391811-267-144623847215691/AnsiballZ_ini_file.py'
Jan 26 09:47:14 compute-0 sudo[121098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:14 compute-0 python3.9[121100]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:47:14 compute-0 sudo[121098]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:14 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 09:47:14 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 09:47:15 compute-0 sudo[121251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfdctaidplvqwktcjmwojxejbpaxbwzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420834.9286044-267-270591171424464/AnsiballZ_ini_file.py'
Jan 26 09:47:15 compute-0 sudo[121251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:15 compute-0 python3.9[121253]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:47:15 compute-0 sudo[121254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:47:15 compute-0 sudo[121254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:15 compute-0 sudo[121254]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:15 compute-0 sudo[121251]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:15 compute-0 ceph-mon[74456]: pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:47:15 compute-0 sudo[121428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckxwlicmomfmjbwqpmvlvsyglxigkhnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420835.5913692-267-56482660123906/AnsiballZ_ini_file.py'
Jan 26 09:47:15 compute-0 sudo[121428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:16 compute-0 python3.9[121430]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:47:16 compute-0 sudo[121428]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:16.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:16.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:47:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:47:16] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:47:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:47:16] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:47:16 compute-0 sudo[121582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emhjgtaykajcseyxtoqxbpaqhadqthof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420836.3284373-267-178625065849490/AnsiballZ_ini_file.py'
Jan 26 09:47:16 compute-0 sudo[121582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:16 compute-0 python3.9[121584]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:47:16 compute-0 sudo[121582]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:47:16.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:47:17 compute-0 sudo[121734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqvnqjietwnrdrjixosfwmmrndbfnvtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420837.2111037-360-157576212982477/AnsiballZ_dnf.py'
Jan 26 09:47:17 compute-0 sudo[121734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:47:17 compute-0 ceph-mon[74456]: pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:47:17 compute-0 python3.9[121736]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:47:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:18.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:18.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:47:18
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'backups', 'default.rgw.log', '.mgr', '.rgw.root', 'volumes', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', '.nfs']
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 09:47:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:47:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:47:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:47:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:47:19 compute-0 sudo[121734]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:19 compute-0 ceph-mon[74456]: pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:47:19 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 2.
Jan 26 09:47:19 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:47:19 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.329s CPU time.
Jan 26 09:47:19 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:47:19 compute-0 sudo[121925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlyvdrrgzoxddxtoiisreshvcehebxle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420839.660179-393-31528534292630/AnsiballZ_setup.py'
Jan 26 09:47:19 compute-0 sudo[121925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:20 compute-0 podman[121938]: 2026-01-26 09:47:20.096006074 +0000 UTC m=+0.054966016 container create deee9e05455ee19a4632830b7e1d3965523669bd607fcf6c6d188864c81f8076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 26 09:47:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af6df3f472b597cbeda041d6e30699aca4734b039036c74a0c51adb9b83a7ff/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af6df3f472b597cbeda041d6e30699aca4734b039036c74a0c51adb9b83a7ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af6df3f472b597cbeda041d6e30699aca4734b039036c74a0c51adb9b83a7ff/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af6df3f472b597cbeda041d6e30699aca4734b039036c74a0c51adb9b83a7ff/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:47:20 compute-0 podman[121938]: 2026-01-26 09:47:20.076478799 +0000 UTC m=+0.035438751 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:47:20 compute-0 podman[121938]: 2026-01-26 09:47:20.179724444 +0000 UTC m=+0.138684396 container init deee9e05455ee19a4632830b7e1d3965523669bd607fcf6c6d188864c81f8076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:47:20 compute-0 podman[121938]: 2026-01-26 09:47:20.188722841 +0000 UTC m=+0.147682783 container start deee9e05455ee19a4632830b7e1d3965523669bd607fcf6c6d188864c81f8076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:47:20 compute-0 bash[121938]: deee9e05455ee19a4632830b7e1d3965523669bd607fcf6c6d188864c81f8076
Jan 26 09:47:20 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:47:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 09:47:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 09:47:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 09:47:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 09:47:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 09:47:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 09:47:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 09:47:20 compute-0 python3.9[121929]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:47:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:47:20 compute-0 sudo[121925]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:20.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:20.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:47:20 compute-0 sudo[122148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmjxcsctomomdfwlmpceqvssvrtzysiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420840.525697-417-200805883039364/AnsiballZ_stat.py'
Jan 26 09:47:20 compute-0 sudo[122148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:20 compute-0 python3.9[122150]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:47:20 compute-0 sudo[122148]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:21 compute-0 sudo[122300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhdejdifmpvhxpasjreuwixtgcgieosu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420841.2085967-444-140850850197726/AnsiballZ_stat.py'
Jan 26 09:47:21 compute-0 sudo[122300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:21 compute-0 python3.9[122302]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:47:21 compute-0 sudo[122300]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:21 compute-0 ceph-mon[74456]: pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:47:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:22.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:22.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:22 compute-0 sudo[122454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fydwzijjkytrjfxchekodxydodgqxhdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420842.0948358-474-253039929334848/AnsiballZ_command.py'
Jan 26 09:47:22 compute-0 sudo[122454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:47:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:47:22 compute-0 python3.9[122456]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:47:22 compute-0 sudo[122454]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094723 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:47:23 compute-0 sudo[122607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjxkjfuoeqovctxcvdllxdxtuyocociy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420843.1011045-504-242237920958270/AnsiballZ_service_facts.py'
Jan 26 09:47:23 compute-0 sudo[122607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:23 compute-0 ceph-mon[74456]: pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:47:23 compute-0 python3.9[122609]: ansible-service_facts Invoked
Jan 26 09:47:23 compute-0 network[122626]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 09:47:23 compute-0 network[122627]: 'network-scripts' will be removed from distribution in near future.
Jan 26 09:47:23 compute-0 network[122628]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 09:47:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:24.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:24.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:47:25 compute-0 ceph-mon[74456]: pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:47:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:26 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:47:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:26 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:47:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:26 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 09:47:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:26.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 26 09:47:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:47:26] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Jan 26 09:47:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:47:26] "GET /metrics HTTP/1.1" 200 48260 "" "Prometheus/2.51.0"
Jan 26 09:47:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:47:26.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:47:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:47:27 compute-0 ceph-mon[74456]: pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 26 09:47:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:28.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:47:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:28.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:47:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Jan 26 09:47:29 compute-0 sudo[122607]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:29 compute-0 ceph-mon[74456]: pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Jan 26 09:47:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:29 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:47:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:29 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:47:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:29 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:47:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:30.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:30.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:47:30 compute-0 sudo[122919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klfduosqdwxuojgvgjtgjokibmcisnix ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769420850.6102192-549-38876766852559/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769420850.6102192-549-38876766852559/args'
Jan 26 09:47:30 compute-0 sudo[122919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:31 compute-0 sudo[122919]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:31 compute-0 sudo[123086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlawjfjxkwhkgasoymnkelhyonskmlst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420851.4417884-582-92701895499717/AnsiballZ_dnf.py'
Jan 26 09:47:31 compute-0 sudo[123086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:31 compute-0 ceph-mon[74456]: pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:47:31 compute-0 python3.9[123088]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:47:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:32.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:47:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:32.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:47:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:47:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:47:33 compute-0 sudo[123086]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:47:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:47:33 compute-0 ceph-mon[74456]: pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:47:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:47:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:34.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:34 compute-0 sudo[123243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnwweqmsnzqgaswsvnzhkrussepoelze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420853.900971-621-164602088969096/AnsiballZ_package_facts.py'
Jan 26 09:47:34 compute-0 sudo[123243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:34.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:47:34 compute-0 python3.9[123245]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 26 09:47:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094734 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:47:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [NOTICE] 025/094734 (4) : haproxy version is 2.3.17-d1c9119
Jan 26 09:47:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [NOTICE] 025/094734 (4) : path to executable is /usr/local/sbin/haproxy
Jan 26 09:47:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [ALERT] 025/094734 (4) : backend 'backend' has no server available!
Jan 26 09:47:35 compute-0 sudo[123243]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:35 compute-0 sudo[123270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:47:35 compute-0 sudo[123270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:35 compute-0 sudo[123270]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:47:35 compute-0 ceph-mon[74456]: pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 09:47:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:35 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:47:36 compute-0 sudo[123433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfeuqiuhxxaketfhlrqrcdqajgwtebtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420855.8163285-651-38891802762539/AnsiballZ_stat.py'
Jan 26 09:47:36 compute-0 sudo[123433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:36 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0544000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:36 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:36 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0544000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:36 compute-0 python3.9[123435]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:47:36 compute-0 sudo[123433]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:36.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:36.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 852 B/s wr, 2 op/s
Jan 26 09:47:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:47:36] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:47:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:47:36] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:47:36 compute-0 sudo[123515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acwbolkchtpxsrbnqbjnqmxhmcpleoqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420855.8163285-651-38891802762539/AnsiballZ_file.py'
Jan 26 09:47:36 compute-0 sudo[123515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:36 compute-0 python3.9[123517]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:47:36 compute-0 sudo[123515]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:47:36.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:47:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:47:36.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:47:37 compute-0 sudo[123667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-denwgkkxbrcshtvpopttimydndzrsvlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420857.12241-687-127813421767426/AnsiballZ_stat.py'
Jan 26 09:47:37 compute-0 sudo[123667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:37 compute-0 python3.9[123669]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:47:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:47:37 compute-0 ceph-mon[74456]: pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 852 B/s wr, 2 op/s
Jan 26 09:47:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:38 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094738 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:47:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:38 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:38 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:47:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:38.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:47:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:38.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Jan 26 09:47:38 compute-0 sudo[123667]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:38 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:47:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:38 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:47:38 compute-0 sudo[123747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdeyppwvavepdejfykhglptfgoxzhhfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420857.12241-687-127813421767426/AnsiballZ_file.py'
Jan 26 09:47:38 compute-0 sudo[123747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:39 compute-0 python3.9[123749]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:47:39 compute-0 sudo[123747]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:39 compute-0 ceph-mon[74456]: pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Jan 26 09:47:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:40 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:40 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:40 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:40 compute-0 sudo[123901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eirneoragnylhoejlaaycspchyhybvpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420859.970835-741-50986491973711/AnsiballZ_lineinfile.py'
Jan 26 09:47:40 compute-0 sudo[123901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:40.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:40.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:47:40 compute-0 python3.9[123903]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:47:40 compute-0 sudo[123901]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:41 compute-0 sshd-session[123928]: Invalid user test from 157.245.76.178 port 53712
Jan 26 09:47:41 compute-0 sshd-session[123928]: Connection closed by invalid user test 157.245.76.178 port 53712 [preauth]
Jan 26 09:47:41 compute-0 sudo[124055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrmoeehneyoummzbcrfxgqpyaounaldc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420861.6289513-786-146618662172375/AnsiballZ_setup.py'
Jan 26 09:47:41 compute-0 sudo[124055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:41 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:47:41 compute-0 ceph-mon[74456]: pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:47:42 compute-0 python3.9[124057]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:47:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:42 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0544001f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:42 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05180016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:42 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:42 compute-0 sudo[124055]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:42.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:47:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:42.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:47:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:47:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:47:42 compute-0 sudo[124141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iacwvdyqcxdurecijaorighhflddrtid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420861.6289513-786-146618662172375/AnsiballZ_systemd.py'
Jan 26 09:47:42 compute-0 sudo[124141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:43 compute-0 python3.9[124143]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:47:43 compute-0 sudo[124141]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:43 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:47:44 compute-0 ceph-mon[74456]: pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:47:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:44 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:44 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0544001f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:44 compute-0 sshd-session[118297]: Connection closed by 192.168.122.30 port 38310
Jan 26 09:47:44 compute-0 sshd-session[118274]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:47:44 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 26 09:47:44 compute-0 systemd[1]: session-42.scope: Consumed 24.980s CPU time.
Jan 26 09:47:44 compute-0 systemd-logind[787]: Session 42 logged out. Waiting for processes to exit.
Jan 26 09:47:44 compute-0 systemd-logind[787]: Removed session 42.
Jan 26 09:47:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:44 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:44.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:44.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Jan 26 09:47:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094745 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:47:46 compute-0 ceph-mon[74456]: pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Jan 26 09:47:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:46 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:46 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:46 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0544008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:46.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:46.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Jan 26 09:47:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:47:46] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:47:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:47:46] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:47:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:46 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:47:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:46 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:47:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:47:46.960Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:47:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:47:46.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:47:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:47:48 compute-0 ceph-mon[74456]: pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Jan 26 09:47:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:48 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0544008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:48 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:48 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:48.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:47:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:48.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:47:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 26 09:47:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:47:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:47:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:47:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:47:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:47:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:47:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:47:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:47:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:47:49 compute-0 sshd-session[124176]: Accepted publickey for zuul from 192.168.122.30 port 56220 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:47:49 compute-0 systemd-logind[787]: New session 43 of user zuul.
Jan 26 09:47:49 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 26 09:47:49 compute-0 sshd-session[124176]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:47:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:49 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:47:50 compute-0 sudo[124329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifkofgnshjcwrxhhtrbompsifxgmjmip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420869.6006312-21-45943952838795/AnsiballZ_file.py'
Jan 26 09:47:50 compute-0 sudo[124329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:50 compute-0 ceph-mon[74456]: pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 26 09:47:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:50 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:50 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0544008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:50 compute-0 python3.9[124331]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:47:50 compute-0 sudo[124329]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:50 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:50.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:50.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Jan 26 09:47:51 compute-0 sudo[124483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eisvrycuhcoirzgqblxdekcjroiqvpeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420870.5724518-57-100303646960390/AnsiballZ_stat.py'
Jan 26 09:47:51 compute-0 sudo[124483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:51 compute-0 ceph-mon[74456]: pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Jan 26 09:47:51 compute-0 python3.9[124485]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:47:51 compute-0 sudo[124483]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:51 compute-0 sudo[124561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofajabywkqvcpkycstzardgqovfolgph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420870.5724518-57-100303646960390/AnsiballZ_file.py'
Jan 26 09:47:51 compute-0 sudo[124561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:52 compute-0 python3.9[124563]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:47:52 compute-0 sudo[124561]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:52 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:52 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:52 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0544008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:52 compute-0 sshd-session[124179]: Connection closed by 192.168.122.30 port 56220
Jan 26 09:47:52 compute-0 sshd-session[124176]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:47:52 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 26 09:47:52 compute-0 systemd[1]: session-43.scope: Consumed 1.442s CPU time.
Jan 26 09:47:52 compute-0 systemd-logind[787]: Session 43 logged out. Waiting for processes to exit.
Jan 26 09:47:52 compute-0 systemd-logind[787]: Removed session 43.
Jan 26 09:47:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:52.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:47:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:52.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:47:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:47:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:47:53 compute-0 ceph-mon[74456]: pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:47:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:54 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:54 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05180032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:54 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:54.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:54.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:47:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094754 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:47:55 compute-0 sudo[124592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:47:55 compute-0 sudo[124592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:47:55 compute-0 sudo[124592]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:55 compute-0 ceph-mon[74456]: pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:47:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:56 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f054400a250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:56 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:56 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05180032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:56.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:56.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:47:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:47:56] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:47:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:47:56] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:47:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:47:56.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:47:57 compute-0 sshd-session[124619]: Accepted publickey for zuul from 192.168.122.30 port 59292 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:47:57 compute-0 systemd-logind[787]: New session 44 of user zuul.
Jan 26 09:47:57 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 26 09:47:57 compute-0 sshd-session[124619]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:47:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:47:57 compute-0 ceph-mon[74456]: pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:47:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:58 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:58 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f054400a250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:47:58 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f054400a250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:47:58 compute-0 python3.9[124772]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:47:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:47:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:47:58.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:47:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:47:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:47:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:47:58.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:47:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:47:59 compute-0 sudo[124928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbxgexhmtgdhqgpxfkkzagbogdpozmjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420878.9378643-54-199705848139481/AnsiballZ_file.py'
Jan 26 09:47:59 compute-0 sudo[124928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:47:59 compute-0 python3.9[124930]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:47:59 compute-0 sudo[124928]: pam_unix(sudo:session): session closed for user root
Jan 26 09:47:59 compute-0 ceph-mon[74456]: pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:48:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:00 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05180032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:00 compute-0 sudo[125105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sivhcxcarjmaraxfrdyzifesntytqqmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420879.8246222-78-236878529264506/AnsiballZ_stat.py'
Jan 26 09:48:00 compute-0 sudo[125105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:00 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f054400a250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:00 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:00 compute-0 python3.9[125107]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:00.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:00 compute-0 sudo[125105]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:00.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:48:00 compute-0 sudo[125183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voynbanyozweqyrjknzacouhadjppfbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420879.8246222-78-236878529264506/AnsiballZ_file.py'
Jan 26 09:48:00 compute-0 sudo[125183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:01 compute-0 python3.9[125185]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.yegs9ugn recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:01 compute-0 sudo[125183]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:01 compute-0 ceph-mon[74456]: pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:48:01 compute-0 sudo[125335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbaxdqknwfxrhftihsmjvoybfzbjrofg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420881.6752408-138-98746090191328/AnsiballZ_stat.py'
Jan 26 09:48:01 compute-0 sudo[125335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:02 compute-0 python3.9[125337]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:02 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05180032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:02 compute-0 sudo[125335]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:02 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:02 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f054400a250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:02 compute-0 sudo[125415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slabqpfrhgmwhqydeovpsdjznjqfiwnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420881.6752408-138-98746090191328/AnsiballZ_file.py'
Jan 26 09:48:02 compute-0 sudo[125415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:48:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:02.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:48:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:02.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:48:02 compute-0 python3.9[125417]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.52vrd7ti recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:02 compute-0 sudo[125415]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:48:03 compute-0 sudo[125567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udyildcpzspnlvcoxownbdxscmengrov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420882.8792262-177-105076917756833/AnsiballZ_file.py'
Jan 26 09:48:03 compute-0 sudo[125567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:03 compute-0 python3.9[125569]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:48:03 compute-0 sudo[125567]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094803 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:48:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:48:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:48:03 compute-0 ceph-mon[74456]: pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:48:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:48:03 compute-0 sudo[125719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uciexnqtrpmgtmlqzmjjpavpfaigqnas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420883.565767-201-25629269830429/AnsiballZ_stat.py'
Jan 26 09:48:03 compute-0 sudo[125719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:04 compute-0 python3.9[125721]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:04 compute-0 sudo[125719]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:04 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:04 compute-0 sudo[125799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggareyzaxrsnubxyesrunsqqdkibjvov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420883.565767-201-25629269830429/AnsiballZ_file.py'
Jan 26 09:48:04 compute-0 sudo[125799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:04 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05180032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:04 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05180032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:04 compute-0 python3.9[125801]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:48:04 compute-0 sudo[125799]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:04.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:04.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:48:04 compute-0 sudo[125951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqwrcjounvyjigyjzewrmaaigrtawxmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420884.6122706-201-15625086414409/AnsiballZ_stat.py'
Jan 26 09:48:04 compute-0 sudo[125951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:05 compute-0 python3.9[125953]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:05 compute-0 sudo[125951]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:05 compute-0 sudo[126029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iydtyaeomdfuvhemjehbgfvbbdwlzhve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420884.6122706-201-15625086414409/AnsiballZ_file.py'
Jan 26 09:48:05 compute-0 sudo[126029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:05 compute-0 python3.9[126031]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:48:05 compute-0 sudo[126029]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:05 compute-0 ceph-mon[74456]: pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:48:06 compute-0 sudo[126181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjsgvhyqmbecorftihvwfbbpbqxitgri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420885.7796776-270-71538574824784/AnsiballZ_file.py'
Jan 26 09:48:06 compute-0 sudo[126181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:06 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f054400a250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:06 compute-0 python3.9[126183]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:06 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f054400a250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:06 compute-0 sudo[126181]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:06 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05180032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:06.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:06.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:48:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:48:06] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:48:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:48:06] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:48:06 compute-0 sudo[126335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crwnxgunbdplomuatqpdardocfbeeuls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420886.5598059-294-51036672430265/AnsiballZ_stat.py'
Jan 26 09:48:06 compute-0 sudo[126335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:48:06.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:48:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:48:06.962Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:48:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:48:06.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:48:07 compute-0 python3.9[126337]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:07 compute-0 sudo[126335]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:07 compute-0 sudo[126413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nphdpokrrkjccjqjjeaniucpwzehpngy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420886.5598059-294-51036672430265/AnsiballZ_file.py'
Jan 26 09:48:07 compute-0 sudo[126413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:07 compute-0 python3.9[126415]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:07 compute-0 sudo[126413]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.687470) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420887687634, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1692, "num_deletes": 250, "total_data_size": 3597861, "memory_usage": 3631432, "flush_reason": "Manual Compaction"}
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420887705285, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2272474, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10808, "largest_seqno": 12499, "table_properties": {"data_size": 2266628, "index_size": 2981, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14457, "raw_average_key_size": 20, "raw_value_size": 2253941, "raw_average_value_size": 3147, "num_data_blocks": 132, "num_entries": 716, "num_filter_entries": 716, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420723, "oldest_key_time": 1769420723, "file_creation_time": 1769420887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17845 microseconds, and 9845 cpu microseconds.
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.705320) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2272474 bytes OK
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.705338) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.706802) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.706817) EVENT_LOG_v1 {"time_micros": 1769420887706812, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.706834) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3590855, prev total WAL file size 3590855, number of live WAL files 2.
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.707859) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2219KB)], [26(14MB)]
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420887707890, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 17079068, "oldest_snapshot_seqno": -1}
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4324 keys, 15332396 bytes, temperature: kUnknown
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420887776816, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 15332396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15299310, "index_size": 21200, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 109420, "raw_average_key_size": 25, "raw_value_size": 15216153, "raw_average_value_size": 3518, "num_data_blocks": 911, "num_entries": 4324, "num_filter_entries": 4324, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769420887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.777007) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 15332396 bytes
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.778132) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 247.6 rd, 222.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 14.1 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(14.3) write-amplify(6.7) OK, records in: 4769, records dropped: 445 output_compression: NoCompression
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.778149) EVENT_LOG_v1 {"time_micros": 1769420887778140, "job": 10, "event": "compaction_finished", "compaction_time_micros": 68983, "compaction_time_cpu_micros": 28539, "output_level": 6, "num_output_files": 1, "total_output_size": 15332396, "num_input_records": 4769, "num_output_records": 4324, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420887778723, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420887781061, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.707773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.781146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.781152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.781155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.781157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:48:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:07.781159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:48:07 compute-0 ceph-mon[74456]: pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:48:07 compute-0 sudo[126565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbtvshenhstrfreanvpmiyklsyorgxsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420887.7169533-330-72437529192263/AnsiballZ_stat.py'
Jan 26 09:48:07 compute-0 sudo[126565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:08 compute-0 systemd[93053]: Created slice User Background Tasks Slice.
Jan 26 09:48:08 compute-0 systemd[93053]: Starting Cleanup of User's Temporary Files and Directories...
Jan 26 09:48:08 compute-0 systemd[93053]: Finished Cleanup of User's Temporary Files and Directories.
Jan 26 09:48:08 compute-0 python3.9[126567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:08 compute-0 sudo[126565]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:08 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05180032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:08 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:08 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0530000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:08 compute-0 sudo[126648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itkwhewlaxzhfcisjlyrrkamkxyjuxhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420887.7169533-330-72437529192263/AnsiballZ_file.py'
Jan 26 09:48:08 compute-0 sudo[126648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:08.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:48:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:08.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:48:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:48:08 compute-0 python3.9[126650]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:08 compute-0 sudo[126648]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:09 compute-0 sudo[126800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pefvmrqjfcynrujgxsjxnfakmligcohn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420888.8810751-366-198193373693422/AnsiballZ_systemd.py'
Jan 26 09:48:09 compute-0 sudo[126800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:09 compute-0 python3.9[126802]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:48:09 compute-0 systemd[1]: Reloading.
Jan 26 09:48:09 compute-0 systemd-rc-local-generator[126825]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:48:09 compute-0 systemd-sysv-generator[126829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:48:09 compute-0 ceph-mon[74456]: pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:48:10 compute-0 sudo[126800]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:10 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:10 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:10 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:10.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:10.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:48:10 compute-0 sudo[126992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klhfkwinxtamthrmxdfwbtxcivkgkgdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420890.3821132-390-18826960187752/AnsiballZ_stat.py'
Jan 26 09:48:10 compute-0 sudo[126992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:10 compute-0 python3.9[126994]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:10 compute-0 sudo[126995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:48:10 compute-0 sudo[126995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:48:10 compute-0 sudo[126995]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:10 compute-0 sudo[126992]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:10 compute-0 sudo[127022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:48:10 compute-0 sudo[127022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:48:11 compute-0 sudo[127120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjysstvaastuferxbbxernhjgxlbqizx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420890.3821132-390-18826960187752/AnsiballZ_file.py'
Jan 26 09:48:11 compute-0 sudo[127120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:11 compute-0 python3.9[127122]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:11 compute-0 sudo[127120]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:11 compute-0 sudo[127022]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:48:11 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:48:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:48:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:48:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:48:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:48:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:48:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:48:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:48:11 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:48:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:48:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:48:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:48:11 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:48:11 compute-0 sudo[127178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:48:11 compute-0 sudo[127178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:48:11 compute-0 sudo[127178]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:11 compute-0 sudo[127226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:48:11 compute-0 sudo[127226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:48:11 compute-0 sudo[127373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwwtisaxcqudylojxumrpohfvrmgrdfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420891.5502176-426-239045629079695/AnsiballZ_stat.py'
Jan 26 09:48:11 compute-0 sudo[127373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:11 compute-0 ceph-mon[74456]: pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:48:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:48:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:48:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:48:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:48:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:48:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:48:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:48:11 compute-0 podman[127394]: 2026-01-26 09:48:11.974147662 +0000 UTC m=+0.039057069 container create f9b48d9c080ec1eeecf942a9d377f7173577abd689c6a1c74ab6e6968def241d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gould, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 26 09:48:12 compute-0 systemd[1]: Started libpod-conmon-f9b48d9c080ec1eeecf942a9d377f7173577abd689c6a1c74ab6e6968def241d.scope.
Jan 26 09:48:12 compute-0 python3.9[127380]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:48:12 compute-0 podman[127394]: 2026-01-26 09:48:11.957766494 +0000 UTC m=+0.022675911 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:48:12 compute-0 podman[127394]: 2026-01-26 09:48:12.0688592 +0000 UTC m=+0.133768617 container init f9b48d9c080ec1eeecf942a9d377f7173577abd689c6a1c74ab6e6968def241d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 09:48:12 compute-0 sudo[127373]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:12 compute-0 podman[127394]: 2026-01-26 09:48:12.074820844 +0000 UTC m=+0.139730241 container start f9b48d9c080ec1eeecf942a9d377f7173577abd689c6a1c74ab6e6968def241d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:48:12 compute-0 podman[127394]: 2026-01-26 09:48:12.07798721 +0000 UTC m=+0.142896707 container attach f9b48d9c080ec1eeecf942a9d377f7173577abd689c6a1c74ab6e6968def241d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:48:12 compute-0 fervent_gould[127410]: 167 167
Jan 26 09:48:12 compute-0 systemd[1]: libpod-f9b48d9c080ec1eeecf942a9d377f7173577abd689c6a1c74ab6e6968def241d.scope: Deactivated successfully.
Jan 26 09:48:12 compute-0 conmon[127410]: conmon f9b48d9c080ec1eeecf9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f9b48d9c080ec1eeecf942a9d377f7173577abd689c6a1c74ab6e6968def241d.scope/container/memory.events
Jan 26 09:48:12 compute-0 podman[127394]: 2026-01-26 09:48:12.081305821 +0000 UTC m=+0.146215218 container died f9b48d9c080ec1eeecf942a9d377f7173577abd689c6a1c74ab6e6968def241d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gould, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 26 09:48:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-448584bb740369769cea59024711b7bf668bc3bd7546be1a0cf6b50e8c08eb93-merged.mount: Deactivated successfully.
Jan 26 09:48:12 compute-0 podman[127394]: 2026-01-26 09:48:12.11820456 +0000 UTC m=+0.183113957 container remove f9b48d9c080ec1eeecf942a9d377f7173577abd689c6a1c74ab6e6968def241d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:48:12 compute-0 systemd[1]: libpod-conmon-f9b48d9c080ec1eeecf942a9d377f7173577abd689c6a1c74ab6e6968def241d.scope: Deactivated successfully.
Jan 26 09:48:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:12 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0530001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:12 compute-0 podman[127479]: 2026-01-26 09:48:12.269599678 +0000 UTC m=+0.039673886 container create 2791e0252ef3e590cf16fa1699cfad110a304675510b3cb9c1611bcb63bc5eda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_sammet, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:48:12 compute-0 systemd[1]: Started libpod-conmon-2791e0252ef3e590cf16fa1699cfad110a304675510b3cb9c1611bcb63bc5eda.scope.
Jan 26 09:48:12 compute-0 sudo[127529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rogqrqeyofglyhhycwxdqvmginqclyuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420891.5502176-426-239045629079695/AnsiballZ_file.py'
Jan 26 09:48:12 compute-0 sudo[127529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:12 compute-0 podman[127479]: 2026-01-26 09:48:12.252171051 +0000 UTC m=+0.022245279 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:48:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:12 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f255c27a13a3b79c9477ffa81b7741d74b7d3046acfda20fc720584a39a307/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f255c27a13a3b79c9477ffa81b7741d74b7d3046acfda20fc720584a39a307/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f255c27a13a3b79c9477ffa81b7741d74b7d3046acfda20fc720584a39a307/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f255c27a13a3b79c9477ffa81b7741d74b7d3046acfda20fc720584a39a307/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:48:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f255c27a13a3b79c9477ffa81b7741d74b7d3046acfda20fc720584a39a307/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:48:12 compute-0 podman[127479]: 2026-01-26 09:48:12.373136618 +0000 UTC m=+0.143210846 container init 2791e0252ef3e590cf16fa1699cfad110a304675510b3cb9c1611bcb63bc5eda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_sammet, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:48:12 compute-0 podman[127479]: 2026-01-26 09:48:12.382494983 +0000 UTC m=+0.152569191 container start 2791e0252ef3e590cf16fa1699cfad110a304675510b3cb9c1611bcb63bc5eda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_sammet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 26 09:48:12 compute-0 podman[127479]: 2026-01-26 09:48:12.385662911 +0000 UTC m=+0.155737119 container attach 2791e0252ef3e590cf16fa1699cfad110a304675510b3cb9c1611bcb63bc5eda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:48:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:12 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:12 compute-0 python3.9[127533]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:12.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:12 compute-0 sudo[127529]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:12.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:48:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:48:12 compute-0 strange_sammet[127531]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:48:12 compute-0 strange_sammet[127531]: --> All data devices are unavailable
Jan 26 09:48:12 compute-0 podman[127479]: 2026-01-26 09:48:12.719695601 +0000 UTC m=+0.489769809 container died 2791e0252ef3e590cf16fa1699cfad110a304675510b3cb9c1611bcb63bc5eda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 09:48:12 compute-0 systemd[1]: libpod-2791e0252ef3e590cf16fa1699cfad110a304675510b3cb9c1611bcb63bc5eda.scope: Deactivated successfully.
Jan 26 09:48:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-27f255c27a13a3b79c9477ffa81b7741d74b7d3046acfda20fc720584a39a307-merged.mount: Deactivated successfully.
Jan 26 09:48:12 compute-0 podman[127479]: 2026-01-26 09:48:12.764436414 +0000 UTC m=+0.534510622 container remove 2791e0252ef3e590cf16fa1699cfad110a304675510b3cb9c1611bcb63bc5eda (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 26 09:48:12 compute-0 systemd[1]: libpod-conmon-2791e0252ef3e590cf16fa1699cfad110a304675510b3cb9c1611bcb63bc5eda.scope: Deactivated successfully.
Jan 26 09:48:12 compute-0 sudo[127226]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:12 compute-0 sudo[127655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:48:12 compute-0 sudo[127655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:48:12 compute-0 sudo[127655]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:12 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:48:12 compute-0 sudo[127683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:48:12 compute-0 sudo[127683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:48:12 compute-0 sudo[127758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tctavlfdmrbcfgyknjkwiwevfuqhmicg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420892.6898363-462-171454946058947/AnsiballZ_systemd.py'
Jan 26 09:48:12 compute-0 sudo[127758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:13 compute-0 python3.9[127760]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:48:13 compute-0 systemd[1]: Reloading.
Jan 26 09:48:13 compute-0 podman[127801]: 2026-01-26 09:48:13.279234476 +0000 UTC m=+0.037909577 container create 07b85a168803d3bd2bfb8a64dbc95d121ad82720b8a51938499ad8974aa95779 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Jan 26 09:48:13 compute-0 systemd-sysv-generator[127852]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:48:13 compute-0 systemd-rc-local-generator[127848]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:48:13 compute-0 podman[127801]: 2026-01-26 09:48:13.263798524 +0000 UTC m=+0.022473635 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:48:13 compute-0 systemd[1]: Started libpod-conmon-07b85a168803d3bd2bfb8a64dbc95d121ad82720b8a51938499ad8974aa95779.scope.
Jan 26 09:48:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:48:13 compute-0 systemd[1]: Starting Create netns directory...
Jan 26 09:48:13 compute-0 podman[127801]: 2026-01-26 09:48:13.618794397 +0000 UTC m=+0.377469588 container init 07b85a168803d3bd2bfb8a64dbc95d121ad82720b8a51938499ad8974aa95779 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 09:48:13 compute-0 podman[127801]: 2026-01-26 09:48:13.634781255 +0000 UTC m=+0.393456356 container start 07b85a168803d3bd2bfb8a64dbc95d121ad82720b8a51938499ad8974aa95779 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 09:48:13 compute-0 podman[127801]: 2026-01-26 09:48:13.640588974 +0000 UTC m=+0.399264165 container attach 07b85a168803d3bd2bfb8a64dbc95d121ad82720b8a51938499ad8974aa95779 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 09:48:13 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 26 09:48:13 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 26 09:48:13 compute-0 systemd[1]: Finished Create netns directory.
Jan 26 09:48:13 compute-0 unruffled_booth[127855]: 167 167
Jan 26 09:48:13 compute-0 systemd[1]: libpod-07b85a168803d3bd2bfb8a64dbc95d121ad82720b8a51938499ad8974aa95779.scope: Deactivated successfully.
Jan 26 09:48:13 compute-0 podman[127801]: 2026-01-26 09:48:13.648004916 +0000 UTC m=+0.406680047 container died 07b85a168803d3bd2bfb8a64dbc95d121ad82720b8a51938499ad8974aa95779 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_booth, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 09:48:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-05afb1ddcbd92336b1501de1da86e26a96920f59f061d4bf188837078548a25d-merged.mount: Deactivated successfully.
Jan 26 09:48:13 compute-0 podman[127801]: 2026-01-26 09:48:13.689246864 +0000 UTC m=+0.447921975 container remove 07b85a168803d3bd2bfb8a64dbc95d121ad82720b8a51938499ad8974aa95779 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_booth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:48:13 compute-0 sudo[127758]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:13 compute-0 systemd[1]: libpod-conmon-07b85a168803d3bd2bfb8a64dbc95d121ad82720b8a51938499ad8974aa95779.scope: Deactivated successfully.
Jan 26 09:48:13 compute-0 podman[127907]: 2026-01-26 09:48:13.861250715 +0000 UTC m=+0.043649914 container create b9f4df7a068f20b79873d61925c81b338609ec58e299cb0ba18f6a5e4828fea4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mahavira, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:48:13 compute-0 ceph-mon[74456]: pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:48:13 compute-0 systemd[1]: Started libpod-conmon-b9f4df7a068f20b79873d61925c81b338609ec58e299cb0ba18f6a5e4828fea4.scope.
Jan 26 09:48:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684d91080985d261ff010d634edc1d3af5fdf9183ebe085ddd3915f02f3cf171/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684d91080985d261ff010d634edc1d3af5fdf9183ebe085ddd3915f02f3cf171/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684d91080985d261ff010d634edc1d3af5fdf9183ebe085ddd3915f02f3cf171/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684d91080985d261ff010d634edc1d3af5fdf9183ebe085ddd3915f02f3cf171/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:48:13 compute-0 podman[127907]: 2026-01-26 09:48:13.84169488 +0000 UTC m=+0.024094079 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:48:13 compute-0 podman[127907]: 2026-01-26 09:48:13.952308244 +0000 UTC m=+0.134707533 container init b9f4df7a068f20b79873d61925c81b338609ec58e299cb0ba18f6a5e4828fea4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mahavira, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:48:13 compute-0 podman[127907]: 2026-01-26 09:48:13.958877354 +0000 UTC m=+0.141276553 container start b9f4df7a068f20b79873d61925c81b338609ec58e299cb0ba18f6a5e4828fea4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mahavira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 26 09:48:13 compute-0 podman[127907]: 2026-01-26 09:48:13.96203876 +0000 UTC m=+0.144437989 container attach b9f4df7a068f20b79873d61925c81b338609ec58e299cb0ba18f6a5e4828fea4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mahavira, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:48:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:14 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:14 compute-0 great_mahavira[127946]: {
Jan 26 09:48:14 compute-0 great_mahavira[127946]:     "0": [
Jan 26 09:48:14 compute-0 great_mahavira[127946]:         {
Jan 26 09:48:14 compute-0 great_mahavira[127946]:             "devices": [
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "/dev/loop3"
Jan 26 09:48:14 compute-0 great_mahavira[127946]:             ],
Jan 26 09:48:14 compute-0 great_mahavira[127946]:             "lv_name": "ceph_lv0",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:             "lv_size": "21470642176",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:             "name": "ceph_lv0",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:             "tags": {
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "ceph.cluster_name": "ceph",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "ceph.crush_device_class": "",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "ceph.encrypted": "0",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "ceph.osd_id": "0",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "ceph.type": "block",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "ceph.vdo": "0",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:                 "ceph.with_tpm": "0"
Jan 26 09:48:14 compute-0 great_mahavira[127946]:             },
Jan 26 09:48:14 compute-0 great_mahavira[127946]:             "type": "block",
Jan 26 09:48:14 compute-0 great_mahavira[127946]:             "vg_name": "ceph_vg0"
Jan 26 09:48:14 compute-0 great_mahavira[127946]:         }
Jan 26 09:48:14 compute-0 great_mahavira[127946]:     ]
Jan 26 09:48:14 compute-0 great_mahavira[127946]: }
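
The JSON blob printed by container great_mahavira above has the shape of `ceph-volume lvm list --format json` output: top-level keys are OSD ids, each mapping to a list of logical volumes with their backing devices and `ceph.*` LV tags (here osd.0 on /dev/ceph_vg0/ceph_lv0, roughly 20 GiB, backed by /dev/loop3). A minimal sketch for summarizing such output; the helper name and reading from stdin are illustrative, not part of cephadm.

    #!/usr/bin/env python3
    # Sketch: summarize ceph-volume "lvm list --format json" style output,
    # i.e. the JSON blob logged by great_mahavira above. Hypothetical helper;
    # feeding it from stdin is illustrative, not part of cephadm.
    import json
    import sys

    def summarize(raw: str) -> None:
        data = json.loads(raw)                    # top-level keys are OSD ids
        for osd_id, lvs in data.items():
            for lv in lvs:
                tags = lv.get("tags", {})
                size_gib = int(lv["lv_size"]) / 2**30
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"({size_gib:.0f} GiB on {','.join(lv['devices'])}) "
                      f"osd_fsid={tags.get('ceph.osd_fsid')} "
                      f"encrypted={tags.get('ceph.encrypted')}")

    if __name__ == "__main__":
        summarize(sys.stdin.read())

On the data above this prints one line for osd.0. The bare `{}` printed by gracious_tesla shortly after appears to be the companion `raw list --format json` result from the cephadm call logged below: no non-LVM OSDs on this host.
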
Jan 26 09:48:14 compute-0 systemd[1]: libpod-b9f4df7a068f20b79873d61925c81b338609ec58e299cb0ba18f6a5e4828fea4.scope: Deactivated successfully.
Jan 26 09:48:14 compute-0 podman[127907]: 2026-01-26 09:48:14.270241785 +0000 UTC m=+0.452640984 container died b9f4df7a068f20b79873d61925c81b338609ec58e299cb0ba18f6a5e4828fea4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mahavira, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 09:48:14 compute-0 podman[127907]: 2026-01-26 09:48:14.312415518 +0000 UTC m=+0.494814717 container remove b9f4df7a068f20b79873d61925c81b338609ec58e299cb0ba18f6a5e4828fea4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 09:48:14 compute-0 systemd[1]: libpod-conmon-b9f4df7a068f20b79873d61925c81b338609ec58e299cb0ba18f6a5e4828fea4.scope: Deactivated successfully.
Jan 26 09:48:14 compute-0 sudo[127683]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:14 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0530001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:14 compute-0 sudo[128072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:48:14 compute-0 sudo[128072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:48:14 compute-0 sudo[128072]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:14 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:14 compute-0 sudo[128097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:48:14 compute-0 sudo[128097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:48:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:14.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:14 compute-0 python3.9[128071]: ansible-ansible.builtin.service_facts Invoked
Jan 26 09:48:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-684d91080985d261ff010d634edc1d3af5fdf9183ebe085ddd3915f02f3cf171-merged.mount: Deactivated successfully.
Jan 26 09:48:14 compute-0 network[128138]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 09:48:14 compute-0 network[128139]: 'network-scripts' will be removed from distribution in near future.
Jan 26 09:48:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:48:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:14.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
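
The paired "starting new request / req done / beast" radosgw entries recur every ~2 seconds from 192.168.122.100 and 192.168.122.102 as anonymous HEAD / probes returning 200 with an empty body, which is the signature of load-balancer health checks against the RGW frontend. A sketch of an equivalent probe, assuming a reachable endpoint; host and port are placeholders, since the log does not show which port the beast frontend listens on.

    #!/usr/bin/env python3
    # Sketch of the kind of probe behind the anonymous "HEAD / HTTP/1.0"
    # entries in the radosgw beast log above. Host and port are placeholders.
    import http.client

    def rgw_alive(host: str, port: int, timeout: float = 2.0) -> bool:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")          # RGW answers 200, empty body
            return conn.getresponse().status == 200
        except OSError:
            return False
        finally:
            conn.close()

    if __name__ == "__main__":
        print(rgw_alive("compute-0.ctlplane.example.com", 8080))
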
Jan 26 09:48:14 compute-0 network[128140]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 09:48:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 09:48:14 compute-0 podman[128184]: 2026-01-26 09:48:14.903234558 +0000 UTC m=+0.048539458 container create 239427a13a6126d41282931cd313186e5b6fc3358d2eaa4b3464dabfdf5db2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_noyce, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:48:14 compute-0 podman[128184]: 2026-01-26 09:48:14.878356408 +0000 UTC m=+0.023661328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:48:15 compute-0 systemd[1]: Started libpod-conmon-239427a13a6126d41282931cd313186e5b6fc3358d2eaa4b3464dabfdf5db2ab.scope.
Jan 26 09:48:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:48:15 compute-0 podman[128184]: 2026-01-26 09:48:15.36573302 +0000 UTC m=+0.511037930 container init 239427a13a6126d41282931cd313186e5b6fc3358d2eaa4b3464dabfdf5db2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_noyce, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:48:15 compute-0 podman[128184]: 2026-01-26 09:48:15.373992996 +0000 UTC m=+0.519297896 container start 239427a13a6126d41282931cd313186e5b6fc3358d2eaa4b3464dabfdf5db2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:48:15 compute-0 podman[128184]: 2026-01-26 09:48:15.377880782 +0000 UTC m=+0.523185702 container attach 239427a13a6126d41282931cd313186e5b6fc3358d2eaa4b3464dabfdf5db2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 26 09:48:15 compute-0 crazy_noyce[128201]: 167 167
Jan 26 09:48:15 compute-0 systemd[1]: libpod-239427a13a6126d41282931cd313186e5b6fc3358d2eaa4b3464dabfdf5db2ab.scope: Deactivated successfully.
Jan 26 09:48:15 compute-0 conmon[128201]: conmon 239427a13a6126d41282 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-239427a13a6126d41282931cd313186e5b6fc3358d2eaa4b3464dabfdf5db2ab.scope/container/memory.events
Jan 26 09:48:15 compute-0 podman[128184]: 2026-01-26 09:48:15.38218559 +0000 UTC m=+0.527490550 container died 239427a13a6126d41282931cd313186e5b6fc3358d2eaa4b3464dabfdf5db2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 09:48:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb9dfa2a74d6cce13687cfc9cff54c5f8273acf772632c813a95087879f8392f-merged.mount: Deactivated successfully.
Jan 26 09:48:15 compute-0 podman[128184]: 2026-01-26 09:48:15.423593181 +0000 UTC m=+0.568898081 container remove 239427a13a6126d41282931cd313186e5b6fc3358d2eaa4b3464dabfdf5db2ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_noyce, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 09:48:15 compute-0 systemd[1]: libpod-conmon-239427a13a6126d41282931cd313186e5b6fc3358d2eaa4b3464dabfdf5db2ab.scope: Deactivated successfully.
Jan 26 09:48:15 compute-0 podman[128236]: 2026-01-26 09:48:15.576413599 +0000 UTC m=+0.043751777 container create 1d6207cacfc1f83f18f9e7e96fe16912b416ad04603afa3a54d63caeccacc8f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_tesla, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:48:15 compute-0 systemd[1]: Started libpod-conmon-1d6207cacfc1f83f18f9e7e96fe16912b416ad04603afa3a54d63caeccacc8f1.scope.
Jan 26 09:48:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aaa03f4267dc5919ddcf8ae0e9dcff92d04fd1f4e8f717a383d7bffa4833e15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aaa03f4267dc5919ddcf8ae0e9dcff92d04fd1f4e8f717a383d7bffa4833e15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aaa03f4267dc5919ddcf8ae0e9dcff92d04fd1f4e8f717a383d7bffa4833e15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aaa03f4267dc5919ddcf8ae0e9dcff92d04fd1f4e8f717a383d7bffa4833e15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:48:15 compute-0 podman[128236]: 2026-01-26 09:48:15.650878045 +0000 UTC m=+0.118216243 container init 1d6207cacfc1f83f18f9e7e96fe16912b416ad04603afa3a54d63caeccacc8f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 09:48:15 compute-0 sudo[128260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:48:15 compute-0 podman[128236]: 2026-01-26 09:48:15.559591159 +0000 UTC m=+0.026929367 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:48:15 compute-0 podman[128236]: 2026-01-26 09:48:15.659052237 +0000 UTC m=+0.126390415 container start 1d6207cacfc1f83f18f9e7e96fe16912b416ad04603afa3a54d63caeccacc8f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_tesla, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:48:15 compute-0 sudo[128260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:48:15 compute-0 podman[128236]: 2026-01-26 09:48:15.662458741 +0000 UTC m=+0.129796909 container attach 1d6207cacfc1f83f18f9e7e96fe16912b416ad04603afa3a54d63caeccacc8f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_tesla, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 09:48:15 compute-0 sudo[128260]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:15 compute-0 ceph-mon[74456]: pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 09:48:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:15 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:48:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:15 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:48:16 compute-0 lvm[128392]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:48:16 compute-0 lvm[128392]: VG ceph_vg0 finished
Jan 26 09:48:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:16 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:16 compute-0 gracious_tesla[128257]: {}
Jan 26 09:48:16 compute-0 podman[128236]: 2026-01-26 09:48:16.298066685 +0000 UTC m=+0.765404863 container died 1d6207cacfc1f83f18f9e7e96fe16912b416ad04603afa3a54d63caeccacc8f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 09:48:16 compute-0 systemd[1]: libpod-1d6207cacfc1f83f18f9e7e96fe16912b416ad04603afa3a54d63caeccacc8f1.scope: Deactivated successfully.
Jan 26 09:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5aaa03f4267dc5919ddcf8ae0e9dcff92d04fd1f4e8f717a383d7bffa4833e15-merged.mount: Deactivated successfully.
Jan 26 09:48:16 compute-0 podman[128236]: 2026-01-26 09:48:16.345369179 +0000 UTC m=+0.812707367 container remove 1d6207cacfc1f83f18f9e7e96fe16912b416ad04603afa3a54d63caeccacc8f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_tesla, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 26 09:48:16 compute-0 systemd[1]: libpod-conmon-1d6207cacfc1f83f18f9e7e96fe16912b416ad04603afa3a54d63caeccacc8f1.scope: Deactivated successfully.
Jan 26 09:48:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:16 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:16 compute-0 sudo[128097]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:48:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:48:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:48:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:48:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:16 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:16 compute-0 sudo[128422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:48:16 compute-0 sudo[128422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:48:16 compute-0 sudo[128422]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:16.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:16.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:48:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:48:16] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:48:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:48:16] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:48:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:48:16.963Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:48:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:48:16.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
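
The alertmanager dispatcher above is failing to POST to the Ceph dashboard receiver at http://compute-{1,2}.ctlplane.example.com:8443/api/prometheus_receiver (one dial timeout, one context deadline exceeded), which points at a reachability problem rather than a payload one. A throwaway stub that accepts POSTs on that port can confirm the network path; this is a hypothetical debugging aid and does not model the dashboard's real API. Note the logged URLs are plain http despite port 8443, so a plain-HTTP stub matches.

    #!/usr/bin/env python3
    # Throwaway stub for the endpoint Alertmanager cannot reach above.
    # Debugging aid only; the real receiver is the Ceph dashboard.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Stub(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            print(f"{self.path}: {len(body)} bytes")
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8443), Stub).serve_forever()
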
Jan 26 09:48:17 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:48:17 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:48:17 compute-0 ceph-mon[74456]: pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:48:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:48:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:18 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05100016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:18 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:18 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300029b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:18.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:48:18
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['volumes', 'backups', '.nfs', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'images', '.mgr', 'vms', '.rgw.root', 'default.rgw.control']
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:48:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:18.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:48:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:48:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
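
The pg_autoscaler lines above fit the relation raw_target = capacity_ratio x bias x PG budget, where a budget of 300 (the default mon_target_pg_per_osd of 100 times an assumed 3 OSDs, consistent with the 60 GiB cluster and the ~20 GiB OSD LV seen earlier) reproduces the logged targets exactly. A short check against two of the logged pools; the final "quantized" values additionally round to a power of two and apply minimum-PG and change-threshold rules that this sketch does not reproduce.

    #!/usr/bin/env python3
    # Reproduce the raw pg_autoscaler targets logged above.
    # Assumption: 3 OSDs with the default mon_target_pg_per_osd = 100,
    # giving a cluster PG budget of 300.
    def raw_pg_target(capacity_ratio: float, bias: float, budget: int = 300) -> float:
        return capacity_ratio * bias * budget

    # '.mgr': ratio 7.185749983720779e-06, bias 1.0
    print(raw_pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557..., matches the log
    # 'cephfs.cephfs.meta': ratio 5.087256625643029e-07, bias 4.0
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # ~0.0006104..., matches the log
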
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:48:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:48:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:48:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:18 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:48:19 compute-0 ceph-mon[74456]: pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:48:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:20.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:20.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:48:21 compute-0 ceph-mon[74456]: pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:48:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:22 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300029b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:22 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:22 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:22 compute-0 sudo[128641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwpabflplopuedmazinxhjdxeavkneyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420902.095467-540-84890807580885/AnsiballZ_stat.py'
Jan 26 09:48:22 compute-0 sudo[128641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:22.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:22.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:48:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:48:22 compute-0 python3.9[128643]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:22 compute-0 sudo[128641]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:23 compute-0 sudo[128719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzomehkqtkprhmbswhwxybdxbkzncqev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420902.095467-540-84890807580885/AnsiballZ_file.py'
Jan 26 09:48:23 compute-0 sudo[128719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:23 compute-0 python3.9[128721]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:23 compute-0 sudo[128719]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:23 compute-0 ceph-mon[74456]: pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:48:23 compute-0 sudo[128873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcgkhpadwevizmgkisefethgnnvtlzgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420903.540937-579-101147081644672/AnsiballZ_file.py'
Jan 26 09:48:23 compute-0 sudo[128873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:24 compute-0 python3.9[128875]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:24 compute-0 sudo[128873]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:24 compute-0 sshd-session[128854]: Invalid user test from 157.245.76.178 port 36658
Jan 26 09:48:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:24 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:24 compute-0 sshd-session[128854]: Connection closed by invalid user test 157.245.76.178 port 36658 [preauth]
Jan 26 09:48:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:24 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300029b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:24 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:48:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:24.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:48:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:48:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:24.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:48:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:48:24 compute-0 sudo[129027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mceazuvpgvefsnlpvpjeyktjclhugdus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420904.2998154-603-40989268301370/AnsiballZ_stat.py'
Jan 26 09:48:24 compute-0 sudo[129027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:24 compute-0 python3.9[129029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:24 compute-0 sudo[129027]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:25 compute-0 sudo[129105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zopyultrspxxvtvmfduustfibwolifdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420904.2998154-603-40989268301370/AnsiballZ_file.py'
Jan 26 09:48:25 compute-0 sudo[129105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:25 compute-0 python3.9[129107]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:25 compute-0 sudo[129105]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094825 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:48:25 compute-0 ceph-mon[74456]: pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:48:26 compute-0 sudo[129257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwzsaponfustzbugfxvxcpltnwahzbyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420905.7884605-648-141818067438096/AnsiballZ_timezone.py'
Jan 26 09:48:26 compute-0 sudo[129257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:26 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:26 compute-0 python3.9[129259]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 26 09:48:26 compute-0 systemd[1]: Starting Time & Date Service...
Jan 26 09:48:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:26 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:26 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0530003ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:26 compute-0 systemd[1]: Started Time & Date Service.
Jan 26 09:48:26 compute-0 sudo[129257]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:26.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:48:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:26.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:48:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:48:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:48:26] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:48:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:48:26] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:48:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:48:26.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:48:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 09:48:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 2666 writes, 12K keys, 2666 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2666 writes, 2666 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2666 writes, 12K keys, 2666 commit groups, 1.0 writes per commit group, ingest: 22.96 MB, 0.04 MB/s
                                           Interval WAL: 2666 writes, 2666 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     49.9      0.39              0.05         5    0.079       0      0       0.0       0.0
                                             L6      1/0   14.62 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6    194.9    177.1      0.28              0.10         4    0.071     16K   1791       0.0       0.0
                                            Sum      1/0   14.62 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6     81.7    103.2      0.68              0.15         9    0.075     16K   1791       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6    132.0    166.6      0.42              0.15         8    0.053     16K   1791       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    194.9    177.1      0.28              0.10         4    0.071     16K   1791       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    144.6      0.14              0.05         4    0.034       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.26              0.00         1    0.259       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.019, interval 0.019
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.12 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.7 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a9cd69b350#2 capacity: 304.00 MB usage: 2.06 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(147,1.89 MB,0.62167%) FilterBlock(10,54.61 KB,0.0175426%) IndexBlock(10,118.50 KB,0.0380667%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 09:48:27 compute-0 sudo[129415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbcjmomsscsnqqeeqzypqigmatvabddc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420906.896847-675-222142658617036/AnsiballZ_file.py'
Jan 26 09:48:27 compute-0 sudo[129415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:27 compute-0 python3.9[129417]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:27 compute-0 sudo[129415]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:48:27 compute-0 sudo[129567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpbwvazhtvwmakueeluzbbpyctenkuun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420907.624142-699-193923363276201/AnsiballZ_stat.py'
Jan 26 09:48:27 compute-0 sudo[129567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:27 compute-0 ceph-mon[74456]: pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:48:28 compute-0 python3.9[129569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:28 compute-0 sudo[129567]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:28 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:28 compute-0 sudo[129647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqxjtyrxvhxpjfqccnsvgqzdhplsvrmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420907.624142-699-193923363276201/AnsiballZ_file.py'
Jan 26 09:48:28 compute-0 sudo[129647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:28 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:28 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:28 compute-0 python3.9[129649]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:28 compute-0 sudo[129647]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:28.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:28.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:48:29 compute-0 sudo[129799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyzbbtuqmnnzuzotgtjssdgxrscmskdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420908.855499-735-51471839778819/AnsiballZ_stat.py'
Jan 26 09:48:29 compute-0 sudo[129799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:29 compute-0 python3.9[129801]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:29 compute-0 sudo[129799]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:29 compute-0 sudo[129877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjmbtmcksyqrounnhmrcgxgwravlmsfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420908.855499-735-51471839778819/AnsiballZ_file.py'
Jan 26 09:48:29 compute-0 sudo[129877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:29 compute-0 python3.9[129879]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.k_7b7wwf recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:29 compute-0 sudo[129877]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:30 compute-0 ceph-mon[74456]: pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:48:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:30 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0530003ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:30 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:30 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:30 compute-0 sudo[130031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvrlmgcnjyijsxhbzuxgsrmtuexmkslw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420910.0318542-771-45692242923471/AnsiballZ_stat.py'
Jan 26 09:48:30 compute-0 sudo[130031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:30.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:30.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:48:30 compute-0 python3.9[130033]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:30 compute-0 sudo[130031]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:30 compute-0 sudo[130109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqesxxsesdzbcgwrgoqoghfwgnngacqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420910.0318542-771-45692242923471/AnsiballZ_file.py'
Jan 26 09:48:30 compute-0 sudo[130109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:31 compute-0 python3.9[130111]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:31 compute-0 sudo[130109]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:32 compute-0 sudo[130261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bimreiszmrhtpvccodwvdszapkxasgfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420911.5958939-810-115465386384239/AnsiballZ_command.py'
Jan 26 09:48:32 compute-0 sudo[130261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:32 compute-0 ceph-mon[74456]: pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:48:32 compute-0 python3.9[130263]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:48:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:32 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:32 compute-0 sudo[130261]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:32 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0530003ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:32 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:48:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:32.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:48:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:32.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:48:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:48:32 compute-0 sudo[130416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmsnkycrpwdbkotfawejioecgiangspw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769420912.489777-834-17131823702180/AnsiballZ_edpm_nftables_from_files.py'
Jan 26 09:48:32 compute-0 sudo[130416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094832 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:48:33 compute-0 python3[130418]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 09:48:33 compute-0 sudo[130416]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:33 compute-0 ceph-mon[74456]: pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:48:33 compute-0 sudo[130568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-totchqyhifagsemcmvrqinlglaerwcql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420913.3669488-858-163670060099412/AnsiballZ_stat.py'
Jan 26 09:48:33 compute-0 sudo[130568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:48:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:48:33 compute-0 python3.9[130570]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:33 compute-0 sudo[130568]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:34 compute-0 sudo[130646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddpltjznywcbqawyftigbrjxogrtsywx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420913.3669488-858-163670060099412/AnsiballZ_file.py'
Jan 26 09:48:34 compute-0 sudo[130646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:34 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:48:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:34 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:34 compute-0 python3.9[130648]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:34 compute-0 sudo[130646]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:34 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:34.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:34.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:48:34 compute-0 sudo[130800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xccyourygblosbmygiqpntemwvbpxfwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420914.6512654-894-184359333564172/AnsiballZ_stat.py'
Jan 26 09:48:34 compute-0 sudo[130800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:35 compute-0 python3.9[130802]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:35 compute-0 sudo[130800]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:35 compute-0 ceph-mon[74456]: pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:48:35 compute-0 sudo[130925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjwencbkwcvlkcfoufqzvvjgnvegeoax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420914.6512654-894-184359333564172/AnsiballZ_copy.py'
Jan 26 09:48:35 compute-0 sudo[130925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:35 compute-0 sudo[130928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:48:35 compute-0 sudo[130928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:48:35 compute-0 sudo[130928]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:35 compute-0 python3.9[130927]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420914.6512654-894-184359333564172/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:35 compute-0 sudo[130925]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:36 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f051c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:36 compute-0 sudo[131104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgwsmfywgedlnxwfowrsjqcymatcanxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420916.0441144-939-264611591720969/AnsiballZ_stat.py'
Jan 26 09:48:36 compute-0 sudo[131104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:36 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:36 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:36 compute-0 python3.9[131106]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:36.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:36 compute-0 sudo[131104]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:48:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:36.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:48:36] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Jan 26 09:48:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:48:36] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Jan 26 09:48:36 compute-0 sudo[131182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgellmlqvacpdptrfipmpcdxtmuyhira ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420916.0441144-939-264611591720969/AnsiballZ_file.py'
Jan 26 09:48:36 compute-0 sudo[131182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:48:36.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:48:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:48:36.967Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:48:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:48:36.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:48:36 compute-0 python3.9[131184]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:37 compute-0 sudo[131182]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:37 compute-0 sudo[131334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frvggwqjuyzueretgnqmjvoictkufvnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420917.3341582-975-142287067587155/AnsiballZ_stat.py'
Jan 26 09:48:37 compute-0 sudo[131334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:48:37 compute-0 ceph-mon[74456]: pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:48:37 compute-0 python3.9[131336]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:37 compute-0 sudo[131334]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:38 compute-0 sudo[131412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwzpslwbguzwqjgffsedrrjsallgjakc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420917.3341582-975-142287067587155/AnsiballZ_file.py'
Jan 26 09:48:38 compute-0 sudo[131412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:38 compute-0 python3.9[131414]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:38 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:38 compute-0 sudo[131412]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:38 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:38 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:38.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:48:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:38.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:38 compute-0 sudo[131566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkpjyomklpzbwxkkcfhjmqcfkmcemusb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420918.5774302-1011-7673936562025/AnsiballZ_stat.py'
Jan 26 09:48:38 compute-0 sudo[131566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:39 compute-0 python3.9[131568]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:39 compute-0 sudo[131566]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:39 compute-0 sudo[131644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdekzwuxeluyysrcxcphvqitbqukhovt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420918.5774302-1011-7673936562025/AnsiballZ_file.py'
Jan 26 09:48:39 compute-0 sudo[131644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:39 compute-0 python3.9[131646]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:39 compute-0 sudo[131644]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:39 compute-0 ceph-mon[74456]: pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:48:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:40 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:40 compute-0 sudo[131800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxumzqikjtogugpmzcxqqmeowiisuafu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420919.9755518-1050-153639029316541/AnsiballZ_command.py'
Jan 26 09:48:40 compute-0 sudo[131800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:40 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0530003ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:40 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518001b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:40 compute-0 python3.9[131802]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:48:40 compute-0 sudo[131800]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:40.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:48:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:40.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:41 compute-0 sudo[131955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peopvqlphjrcdzjxtdvvryvgtphlsntm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420920.6916096-1074-223883159635103/AnsiballZ_blockinfile.py'
Jan 26 09:48:41 compute-0 sudo[131955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:41 compute-0 python3.9[131957]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:41 compute-0 sudo[131955]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:41 compute-0 ceph-mon[74456]: pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:48:42 compute-0 sudo[132107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glnofokkgqjzxefeigweqnzabdjmqmhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420921.8537152-1101-104724833920292/AnsiballZ_file.py'
Jan 26 09:48:42 compute-0 sudo[132107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:42 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:42 compute-0 python3.9[132109]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:42 compute-0 sudo[132107]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:42 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05440089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:42 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0530003ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:42.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:42 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:48:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:48:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:42.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:42 compute-0 sudo[132261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnelcbbhewmlfbvagisxvbmpiqafqdqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420922.4186516-1101-101070367247051/AnsiballZ_file.py'
Jan 26 09:48:42 compute-0 sudo[132261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:48:42 compute-0 python3.9[132263]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:42 compute-0 sudo[132261]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:43 compute-0 ceph-mon[74456]: pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:48:44 compute-0 sudo[132413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eymiguqsrizusunifrzioxyqtpbxdggk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420923.6463869-1146-153741669615976/AnsiballZ_mount.py'
Jan 26 09:48:44 compute-0 sudo[132413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:44 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518001b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:44 compute-0 python3.9[132415]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 26 09:48:44 compute-0 sudo[132413]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:44 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:44 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:44.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:48:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:44.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:44 compute-0 sudo[132568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zccrwltzicodaaiegrybqtmxgqlkyefi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420924.4282253-1146-102399965902752/AnsiballZ_mount.py'
Jan 26 09:48:44 compute-0 sudo[132568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:44 compute-0 python3.9[132570]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 26 09:48:44 compute-0 sudo[132568]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:45 compute-0 sshd-session[124622]: Connection closed by 192.168.122.30 port 59292
Jan 26 09:48:45 compute-0 sshd-session[124619]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:48:45 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 26 09:48:45 compute-0 systemd[1]: session-44.scope: Consumed 28.654s CPU time.
Jan 26 09:48:45 compute-0 systemd-logind[787]: Session 44 logged out. Waiting for processes to exit.
Jan 26 09:48:45 compute-0 systemd-logind[787]: Removed session 44.
Jan 26 09:48:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:45 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:48:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:45 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:48:45 compute-0 ceph-mon[74456]: pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:48:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:46 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:46 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:46 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0530003ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:46.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:48:46] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Jan 26 09:48:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:48:46] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Jan 26 09:48:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:48:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:46.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:48:46.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:48:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:48:47 compute-0 ceph-mon[74456]: pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:48:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:48 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518001b80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:48 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:48 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:48.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:48 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:48:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:48:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:48:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:48.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:48:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:48:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:48:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:48:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:48:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:48:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:48:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:48:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:48:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:48:49 compute-0 ceph-mon[74456]: pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:48:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:50 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0530003ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:50 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518002c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:50 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.003000081s ======
Jan 26 09:48:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:50.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Jan 26 09:48:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:48:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:50.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:51 compute-0 sshd-session[132601]: Accepted publickey for zuul from 192.168.122.30 port 58128 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:48:51 compute-0 systemd-logind[787]: New session 45 of user zuul.
Jan 26 09:48:51 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 26 09:48:51 compute-0 sshd-session[132601]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:48:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094851 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:48:51 compute-0 sudo[132754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkoicgthpmrsbtafkttohotsobzonsok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420931.1963637-18-257015216676352/AnsiballZ_tempfile.py'
Jan 26 09:48:51 compute-0 sudo[132754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:51 compute-0 ceph-mon[74456]: pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:48:52 compute-0 python3.9[132756]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 26 09:48:52 compute-0 sudo[132754]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:52 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c002400 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:52 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0530003ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:52 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518002c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:48:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:52.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:48:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Jan 26 09:48:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:52.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:48:52 compute-0 sudo[132908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scvwwecmztpjvnpxmzbhoxcoefvjrkxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420932.2882144-54-143150164608107/AnsiballZ_stat.py'
Jan 26 09:48:52 compute-0 sudo[132908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:52 compute-0 python3.9[132910]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:48:52 compute-0 sudo[132908]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:53 compute-0 sshd-session[71335]: Received disconnect from 38.102.83.222 port 53582:11: disconnected by user
Jan 26 09:48:53 compute-0 sshd-session[71335]: Disconnected from user zuul 38.102.83.222 port 53582
Jan 26 09:48:53 compute-0 sshd-session[71332]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:48:53 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 26 09:48:53 compute-0 systemd[1]: session-18.scope: Consumed 1min 34.909s CPU time.
Jan 26 09:48:53 compute-0 systemd-logind[787]: Session 18 logged out. Waiting for processes to exit.
Jan 26 09:48:53 compute-0 systemd-logind[787]: Removed session 18.
Jan 26 09:48:53 compute-0 sudo[133062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiycyfcqqufvfdrkzimuafqiamjcykyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420933.1488776-78-177883449091445/AnsiballZ_slurp.py'
Jan 26 09:48:53 compute-0 sudo[133062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:53 compute-0 ceph-mon[74456]: pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Jan 26 09:48:53 compute-0 python3.9[133064]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 26 09:48:53 compute-0 sudo[133062]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:54 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:54 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c002400 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:54 compute-0 sudo[133216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkuuflodkvssxhdmgkoqqkxkvdffymzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420934.1550064-102-235488582595750/AnsiballZ_stat.py'
Jan 26 09:48:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:54 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0530003ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:54 compute-0 sudo[133216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:54.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:54 compute-0 python3.9[133218]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.i1q5aw_5 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:48:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 852 B/s wr, 2 op/s
Jan 26 09:48:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:54.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:54 compute-0 sudo[133216]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094855 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:48:55 compute-0 sudo[133341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gujxgxvgeohaejoshaqgzwygttiqclyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420934.1550064-102-235488582595750/AnsiballZ_copy.py'
Jan 26 09:48:55 compute-0 sudo[133341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:55 compute-0 python3.9[133343]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.i1q5aw_5 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769420934.1550064-102-235488582595750/.source.i1q5aw_5 _original_basename=.hcmis1iz follow=False checksum=e638a8a1231bcbc6594aeda119d676a260ed9e9f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:55 compute-0 sudo[133341]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:55 compute-0 ceph-mon[74456]: pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 852 B/s wr, 2 op/s
Jan 26 09:48:55 compute-0 sudo[133420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:48:55 compute-0 sudo[133420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:48:55 compute-0 sudo[133420]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:56 compute-0 sudo[133518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yomziazsfmysielmsmcmdvdpysskctib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420935.6206005-147-223610628856704/AnsiballZ_setup.py'
Jan 26 09:48:56 compute-0 sudo[133518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:56 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:56 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:56 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:56 compute-0 python3.9[133520]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:48:56 compute-0 sudo[133518]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:56 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 26 09:48:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:48:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:56.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:48:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:48:56] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:48:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:48:56] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:48:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Jan 26 09:48:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:56.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:48:56.970Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:48:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:48:56.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:48:57 compute-0 sudo[133674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdgjwnhcsxlydekjmspksjltjmlcifrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420936.8159397-172-49321433155281/AnsiballZ_blockinfile.py'
Jan 26 09:48:57 compute-0 sudo[133674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:57 compute-0 python3.9[133676]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0TZpcPGqQPKNdLKsJSWd1uRV3wOVDiIo3gYwVWAuH5m+Wvpw34ZI+6+d4y3DWMqDRZVWAVV0NNFB+b4MQeivx4S7KMCvBctzJ6VIyUDL5NZrwys0sYPH+33ncdZd6C8LrfCvIct+DbWCx72RQ+G0yRbYK1r/m5+dzW2411NqWn8kJkBUeLJIqT2vhFoNpO8NaWSVlWEgl5YunYEPS4v5NSM88ke6Gzc5X5sjxsz65REj6/1BXsA+quwcTAe/KC1/1Rr2cufefwf0uayM6sGuUDATjWIw36YqUeL9wc/IDdIEFEvj2hr/v+r6laaKMidOYJXBiQwIWpgWCOosSj4vrPQmDfqjOa8sAn7yWPVgxyARccavEO89zV2lpFcYTdqegPxjB90lD3Q1pMU6veJUWTRo0LAZ6n9rsRBgF0Mhr75T32Lbqf3KBro6/nPrp1XCD08mNv2cEYwp+put7vwvHzN1nPztqMsIDAMJMupwI+Buyr3xCPHe3hcAavahF+YM=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINbUUMKlV4hksqDn2YVVAHPCHip80h7zj0rReM94Ja2l
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFtD30BOt1BlR6BYm8DU7sxF5fAzZ/aciKetiRsXWlbsXS3Z4mVG1ZAF9AhArV+OaapsLeaQFybIC0e2fudJfos=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyi0WEBS9Gc5Xay4vqFSdv0cJGdtezg+CrNF/vjEeF3l4EhpAAj7XRLEhEU1kz0DDKkzclG65hBNPO4/9cfzEa31EsSmzOqjqZp5ri20HVDkiZlUTTklhrbJGydUw6mcy+rIN1qsUugVHwkA9ufZLvzm9wvljzL+WPt1o41GT42NdNzyfPfnqf7HMDziNUNUUZjqsoy+DQnlMl3c3NHiGysPJ6IssbLBCFzPdBHpEYmR8b44qlJEhx3RYWl3QLcXAyoK7VpPdFO4ltMT+0KVVbLO9IUrocCQ4HfafPn/mV1Rq3phDWvCTRfRo07Mu4Oc4XBu+RIk9tt1WTIdT/ZusPUNSkFgprdU9zFIHLR0KyIX4qRSuWBeB20Ic5pvkRvNtwLB8lPt4NVi7bmun6moO8nu6cOjJ61CCAobDSEL/Z2cG3ADucjCSKtWLM0eSdt6T71NmULMhdB8ljIK4em/NCf/qZWjYr70WKyIZ9b8N5lDO8NF1tbPJyu+O0ebq/JN8=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAaib//yQ1QyvWijjfui4OBtTtMt7Dos+hlx8rucs2Tn
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP7YXsQWyEQWSdy5tcEAtltn11CwuaqW/S8S3OB1580hTlcLZWLPDHbzSwNDf13HBG9wgLFgmueLB8U6J7wvvcM=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDm+Vrn31pimz+Of4pkRaSS+qazCMrOF2INZ0EZsyoNG5922K2xwdC9F6r4k2L54HPEpDiazPoDsOHQvs1I+CvayNM2D+8hZhvqxZOMimP8b056aM14nht9ADrJUnlaDs57FkgIKQdxma9I0sW8Up3bbLchFOj2grOjH7gRdUBxblzIS01/P5NV8/kPsRXDoCgx+QAxU2nEqyCQd0JXLKoy+v6t+pG7We9wFXXr2z4XmAx7yeU0Y6NsJ1Seies0apLTmfK3HAtj/3LObvZegqVGDFtl5spotTmJdPJUCZhniaUmyYZ4jtIEno86Bf8OhS3NvLsxmNXuJcInlmCHGXDP9FPBrxG+yVB63FUAeyejCXntEyOzXFp8fiCuOVQuqDTWB4UxTRYh3EqVruxhY1taarew/VfsxIAxv6BWsqtvh/6xtRtJ9vTSDHsDTRaOcChfT5BnATFJ+Ilwpve8C4bjRVdlStH+99TgtNPOg2Fxf8scyIHInM9c4Yn7g8YTiyk=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICrJdFptF1rp2hjeKcc0nSEhHvDtAYFU4gfqZN6U+WTb
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNa2lKVjuYCljd0rl1qDkTP3ZoTV9fkbcXvtxSizwygrF6dU+RWdeB3LOkT5U/2GTJuWvOqxJBc3Y1d0b3Dj5Do=
                                              create=True mode=0644 path=/tmp/ansible.i1q5aw_5 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:57 compute-0 sudo[133674]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:48:57 compute-0 ceph-mon[74456]: pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.847070) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420937847237, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 683, "num_deletes": 251, "total_data_size": 963158, "memory_usage": 975704, "flush_reason": "Manual Compaction"}
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420937856620, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 953382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12500, "largest_seqno": 13182, "table_properties": {"data_size": 949852, "index_size": 1374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7899, "raw_average_key_size": 18, "raw_value_size": 942764, "raw_average_value_size": 2239, "num_data_blocks": 61, "num_entries": 421, "num_filter_entries": 421, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420887, "oldest_key_time": 1769420887, "file_creation_time": 1769420937, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 9584 microseconds, and 4982 cpu microseconds.
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.856670) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 953382 bytes OK
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.856696) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.858145) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.858161) EVENT_LOG_v1 {"time_micros": 1769420937858157, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.858186) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 959662, prev total WAL file size 959662, number of live WAL files 2.
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.859366) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(931KB)], [29(14MB)]
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420937859456, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 16285778, "oldest_snapshot_seqno": -1}
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4229 keys, 12878471 bytes, temperature: kUnknown
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420937924753, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 12878471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12847225, "index_size": 19552, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 108309, "raw_average_key_size": 25, "raw_value_size": 12767000, "raw_average_value_size": 3018, "num_data_blocks": 828, "num_entries": 4229, "num_filter_entries": 4229, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769420937, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.925129) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12878471 bytes
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.926870) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 249.0 rd, 196.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.6 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(30.6) write-amplify(13.5) OK, records in: 4745, records dropped: 516 output_compression: NoCompression
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.926902) EVENT_LOG_v1 {"time_micros": 1769420937926888, "job": 12, "event": "compaction_finished", "compaction_time_micros": 65411, "compaction_time_cpu_micros": 29905, "output_level": 6, "num_output_files": 1, "total_output_size": 12878471, "num_input_records": 4745, "num_output_records": 4229, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420937927328, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769420937931554, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.859165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.931713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.931723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.931725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.931727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:48:57 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:48:57.931729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:48:58 compute-0 sudo[133826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvsrhasqidijycnurlbqopbrdggieccz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420937.6524491-196-224798765892703/AnsiballZ_command.py'
Jan 26 09:48:58 compute-0 sudo[133826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:58 compute-0 python3.9[133828]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.i1q5aw_5' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:48:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:58 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0530003ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:58 compute-0 sudo[133826]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:58 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:48:58 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:48:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:48:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:48:58.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:48:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Jan 26 09:48:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:48:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:48:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:48:58.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:48:58 compute-0 sudo[133982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kujjxzwfwcqpkglypewmpjkyyijmgghl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420938.5265229-220-260686906700218/AnsiballZ_file.py'
Jan 26 09:48:58 compute-0 sudo[133982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:48:59 compute-0 python3.9[133984]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.i1q5aw_5 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:48:59 compute-0 sudo[133982]: pam_unix(sudo:session): session closed for user root
Jan 26 09:48:59 compute-0 sshd-session[132604]: Connection closed by 192.168.122.30 port 58128
Jan 26 09:48:59 compute-0 sshd-session[132601]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:48:59 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 26 09:48:59 compute-0 systemd[1]: session-45.scope: Consumed 5.347s CPU time.
Jan 26 09:48:59 compute-0 systemd-logind[787]: Session 45 logged out. Waiting for processes to exit.
Jan 26 09:48:59 compute-0 systemd-logind[787]: Removed session 45.
Jan 26 09:48:59 compute-0 ceph-mon[74456]: pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Jan 26 09:49:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:00 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:00 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:00 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:00 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:49:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:00.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:49:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:00.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:01 compute-0 ceph-mon[74456]: pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:49:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:02 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518002c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:02 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:02 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:49:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:02.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:49:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:49:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:02.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:49:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:03 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:49:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:03 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:49:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:49:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:49:03 compute-0 ceph-mon[74456]: pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:49:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:49:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:04 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:04 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518002c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:04 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:04.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:49:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:04.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:04 compute-0 sshd-session[134016]: Accepted publickey for zuul from 192.168.122.30 port 41924 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:49:04 compute-0 systemd-logind[787]: New session 46 of user zuul.
Jan 26 09:49:04 compute-0 systemd[1]: Started Session 46 of User zuul.
Jan 26 09:49:04 compute-0 sshd-session[134016]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:49:05 compute-0 python3.9[134169]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:49:06 compute-0 ceph-mon[74456]: pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:49:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 26 09:49:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:06 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:06 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:06 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518002c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:49:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:06.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:49:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:49:06] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Jan 26 09:49:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:49:06] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Jan 26 09:49:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:49:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:06.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:06 : epoch 69773828 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:49:06 compute-0 sudo[134325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwlbbovaukdvchrwnxdpiyqlirkthyjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420946.1861548-51-47045253390995/AnsiballZ_systemd.py'
Jan 26 09:49:06 compute-0 sudo[134325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:49:06.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:49:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:49:06.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:49:07 compute-0 python3.9[134327]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 26 09:49:07 compute-0 sudo[134325]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:07 compute-0 sudo[134482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aosqdwqudtawvrxbvucfidmpngurjnag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420947.4420764-75-12956745237227/AnsiballZ_systemd.py'
Jan 26 09:49:07 compute-0 sudo[134482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:49:07 compute-0 python3.9[134484]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:49:07 compute-0 sshd-session[134454]: Invalid user test from 157.245.76.178 port 56858
Jan 26 09:49:08 compute-0 sudo[134482]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:08 compute-0 sshd-session[134454]: Connection closed by invalid user test 157.245.76.178 port 56858 [preauth]
Jan 26 09:49:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:08 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:08 compute-0 ceph-mon[74456]: pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:49:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:08 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:08 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:49:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:08.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:49:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:49:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:08.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:08 compute-0 sudo[134637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyfkbvflyweesnfswdtcgihjylhlbeot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420948.4588182-102-194892732520930/AnsiballZ_command.py'
Jan 26 09:49:08 compute-0 sudo[134637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:09 compute-0 python3.9[134639]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:49:09 compute-0 sudo[134637]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:09 compute-0 ceph-mon[74456]: pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:49:09 compute-0 sudo[134790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckzbrfteoinexfmouczdckgakqygwqkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420949.438269-126-209391370640787/AnsiballZ_stat.py'
Jan 26 09:49:09 compute-0 sudo[134790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:10 compute-0 python3.9[134792]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:49:10 compute-0 sudo[134790]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:10 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518002c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:10 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:10 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:49:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:10.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:49:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:49:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:10.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:10 compute-0 sudo[134944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niolqdtmowkiegzjngfgvzaijywxbgyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420950.3059876-153-70751820020026/AnsiballZ_file.py'
Jan 26 09:49:10 compute-0 sudo[134944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:10 compute-0 python3.9[134946]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:10 compute-0 sudo[134944]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:11 compute-0 sshd-session[134019]: Connection closed by 192.168.122.30 port 41924
Jan 26 09:49:11 compute-0 sshd-session[134016]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:49:11 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Jan 26 09:49:11 compute-0 systemd[1]: session-46.scope: Consumed 3.726s CPU time.
Jan 26 09:49:11 compute-0 systemd-logind[787]: Session 46 logged out. Waiting for processes to exit.
Jan 26 09:49:11 compute-0 systemd-logind[787]: Removed session 46.
Jan 26 09:49:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094911 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:49:11 compute-0 ceph-mon[74456]: pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:49:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:12 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:12 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0518002c80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:12 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:49:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:12.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:49:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:49:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:49:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:12.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:49:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:49:13 compute-0 ceph-mon[74456]: pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:49:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:14 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:14 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:14 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:14.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:49:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:14.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:15 compute-0 ceph-mon[74456]: pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:49:15 compute-0 sudo[134977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:49:15 compute-0 sudo[134977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:15 compute-0 sudo[134977]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:16 compute-0 sshd-session[135002]: Accepted publickey for zuul from 192.168.122.30 port 34958 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:49:16 compute-0 systemd-logind[787]: New session 47 of user zuul.
Jan 26 09:49:16 compute-0 systemd[1]: Started Session 47 of User zuul.
Jan 26 09:49:16 compute-0 sshd-session[135002]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:49:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:16 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:16 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:16 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:16.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:49:16] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Jan 26 09:49:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:49:16] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Jan 26 09:49:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:49:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:16.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:16 compute-0 sudo[135081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:49:16 compute-0 sudo[135081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:16 compute-0 sudo[135081]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:16 compute-0 sudo[135127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 26 09:49:16 compute-0 sudo[135127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:49:16.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:49:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:49:16.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:49:17 compute-0 sudo[135127]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:49:17 compute-0 python3.9[135207]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:49:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:49:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:49:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:49:17 compute-0 sudo[135232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:49:17 compute-0 sudo[135232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:17 compute-0 sudo[135232]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:17 compute-0 sudo[135257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:49:17 compute-0 sudo[135257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:49:17 compute-0 sudo[135257]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:49:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:49:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:49:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:49:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:49:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:49:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:49:17 compute-0 sudo[135462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjwjjgyllnbdqggpgoouonzrlcwtbvyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420957.7169468-57-203848282331633/AnsiballZ_setup.py'
Jan 26 09:49:17 compute-0 sudo[135462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:49:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:49:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:49:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:49:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:49:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:49:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:49:18 compute-0 sudo[135465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:49:18 compute-0 sudo[135465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:18 compute-0 sudo[135465]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:18 compute-0 ceph-mon[74456]: pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:49:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:49:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:49:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:49:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:49:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:49:18 compute-0 sudo[135490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:49:18 compute-0 sudo[135490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:18 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:18 compute-0 python3.9[135464]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:49:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:18 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:18 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:18 compute-0 sudo[135462]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:18 compute-0 podman[135563]: 2026-01-26 09:49:18.54110529 +0000 UTC m=+0.045825921 container create 74557b14a1b960250a72fc01b5108f6323b3fe08fe04d88c44e5483d0df0476d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:49:18
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'images', '.nfs', 'backups', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.control']
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:49:18 compute-0 systemd[1]: Started libpod-conmon-74557b14a1b960250a72fc01b5108f6323b3fe08fe04d88c44e5483d0df0476d.scope.
Jan 26 09:49:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:49:18 compute-0 podman[135563]: 2026-01-26 09:49:18.520534624 +0000 UTC m=+0.025255275 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:49:18 compute-0 podman[135563]: 2026-01-26 09:49:18.629915341 +0000 UTC m=+0.134635982 container init 74557b14a1b960250a72fc01b5108f6323b3fe08fe04d88c44e5483d0df0476d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:49:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:18 compute-0 podman[135563]: 2026-01-26 09:49:18.636901473 +0000 UTC m=+0.141622104 container start 74557b14a1b960250a72fc01b5108f6323b3fe08fe04d88c44e5483d0df0476d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 26 09:49:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:18.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:18 compute-0 podman[135563]: 2026-01-26 09:49:18.639778802 +0000 UTC m=+0.144499453 container attach 74557b14a1b960250a72fc01b5108f6323b3fe08fe04d88c44e5483d0df0476d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 09:49:18 compute-0 affectionate_bell[135578]: 167 167
Jan 26 09:49:18 compute-0 systemd[1]: libpod-74557b14a1b960250a72fc01b5108f6323b3fe08fe04d88c44e5483d0df0476d.scope: Deactivated successfully.
Jan 26 09:49:18 compute-0 podman[135563]: 2026-01-26 09:49:18.644474741 +0000 UTC m=+0.149195362 container died 74557b14a1b960250a72fc01b5108f6323b3fe08fe04d88c44e5483d0df0476d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:49:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d65831298913c8f8643e3e15ce450ba9b0f203f6ae25fe9f3af1d6b19b0fe9cd-merged.mount: Deactivated successfully.
Jan 26 09:49:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:18.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:18 compute-0 podman[135563]: 2026-01-26 09:49:18.683972687 +0000 UTC m=+0.188693318 container remove 74557b14a1b960250a72fc01b5108f6323b3fe08fe04d88c44e5483d0df0476d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:49:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:49:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:49:18 compute-0 systemd[1]: libpod-conmon-74557b14a1b960250a72fc01b5108f6323b3fe08fe04d88c44e5483d0df0476d.scope: Deactivated successfully.
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:49:18 compute-0 podman[135625]: 2026-01-26 09:49:18.836405916 +0000 UTC m=+0.042309563 container create d96915f01e225b9cc38f0e7db6c380d4318aa1f43d15bb7dd376c8f65cb9cc1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_booth, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:49:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:49:18 compute-0 systemd[1]: Started libpod-conmon-d96915f01e225b9cc38f0e7db6c380d4318aa1f43d15bb7dd376c8f65cb9cc1d.scope.
Jan 26 09:49:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:49:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e149fd85112cb63ccd158870392c617dc8c992fb1d1d5fe6cf8f84df347e214f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e149fd85112cb63ccd158870392c617dc8c992fb1d1d5fe6cf8f84df347e214f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:18 compute-0 podman[135625]: 2026-01-26 09:49:18.817751744 +0000 UTC m=+0.023655391 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:49:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e149fd85112cb63ccd158870392c617dc8c992fb1d1d5fe6cf8f84df347e214f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e149fd85112cb63ccd158870392c617dc8c992fb1d1d5fe6cf8f84df347e214f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e149fd85112cb63ccd158870392c617dc8c992fb1d1d5fe6cf8f84df347e214f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:18 compute-0 podman[135625]: 2026-01-26 09:49:18.932724904 +0000 UTC m=+0.138628551 container init d96915f01e225b9cc38f0e7db6c380d4318aa1f43d15bb7dd376c8f65cb9cc1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:49:18 compute-0 podman[135625]: 2026-01-26 09:49:18.940149929 +0000 UTC m=+0.146053556 container start d96915f01e225b9cc38f0e7db6c380d4318aa1f43d15bb7dd376c8f65cb9cc1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:49:18 compute-0 podman[135625]: 2026-01-26 09:49:18.945114715 +0000 UTC m=+0.151018352 container attach d96915f01e225b9cc38f0e7db6c380d4318aa1f43d15bb7dd376c8f65cb9cc1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_booth, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:49:18 compute-0 sudo[135699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofqubqdeulporehsodjpukthydivzkvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420957.7169468-57-203848282331633/AnsiballZ_dnf.py'
Jan 26 09:49:18 compute-0 sudo[135699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:49:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:49:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:49:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:49:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:49:19 compute-0 python3.9[135701]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 09:49:19 compute-0 optimistic_booth[135668]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:49:19 compute-0 optimistic_booth[135668]: --> All data devices are unavailable
Jan 26 09:49:19 compute-0 systemd[1]: libpod-d96915f01e225b9cc38f0e7db6c380d4318aa1f43d15bb7dd376c8f65cb9cc1d.scope: Deactivated successfully.
Jan 26 09:49:19 compute-0 podman[135625]: 2026-01-26 09:49:19.267497877 +0000 UTC m=+0.473401524 container died d96915f01e225b9cc38f0e7db6c380d4318aa1f43d15bb7dd376c8f65cb9cc1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:49:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e149fd85112cb63ccd158870392c617dc8c992fb1d1d5fe6cf8f84df347e214f-merged.mount: Deactivated successfully.
Jan 26 09:49:19 compute-0 podman[135625]: 2026-01-26 09:49:19.311178918 +0000 UTC m=+0.517082555 container remove d96915f01e225b9cc38f0e7db6c380d4318aa1f43d15bb7dd376c8f65cb9cc1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:49:19 compute-0 systemd[1]: libpod-conmon-d96915f01e225b9cc38f0e7db6c380d4318aa1f43d15bb7dd376c8f65cb9cc1d.scope: Deactivated successfully.
Jan 26 09:49:19 compute-0 sudo[135490]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:19 compute-0 sudo[135726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:49:19 compute-0 sudo[135726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:19 compute-0 sudo[135726]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:19 compute-0 sudo[135751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:49:19 compute-0 sudo[135751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:19 compute-0 podman[135816]: 2026-01-26 09:49:19.829797734 +0000 UTC m=+0.034258083 container create 1487f4d2ee75c34820b3ad07379b63026ad9efe59dff0fd8dd203296c8d90867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:49:19 compute-0 systemd[1]: Started libpod-conmon-1487f4d2ee75c34820b3ad07379b63026ad9efe59dff0fd8dd203296c8d90867.scope.
Jan 26 09:49:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:49:19 compute-0 podman[135816]: 2026-01-26 09:49:19.893549186 +0000 UTC m=+0.098009555 container init 1487f4d2ee75c34820b3ad07379b63026ad9efe59dff0fd8dd203296c8d90867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:49:19 compute-0 podman[135816]: 2026-01-26 09:49:19.902253456 +0000 UTC m=+0.106713805 container start 1487f4d2ee75c34820b3ad07379b63026ad9efe59dff0fd8dd203296c8d90867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_satoshi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 26 09:49:19 compute-0 podman[135816]: 2026-01-26 09:49:19.905248188 +0000 UTC m=+0.109708567 container attach 1487f4d2ee75c34820b3ad07379b63026ad9efe59dff0fd8dd203296c8d90867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 26 09:49:19 compute-0 youthful_satoshi[135833]: 167 167
Jan 26 09:49:19 compute-0 systemd[1]: libpod-1487f4d2ee75c34820b3ad07379b63026ad9efe59dff0fd8dd203296c8d90867.scope: Deactivated successfully.
Jan 26 09:49:19 compute-0 podman[135816]: 2026-01-26 09:49:19.910615286 +0000 UTC m=+0.115075645 container died 1487f4d2ee75c34820b3ad07379b63026ad9efe59dff0fd8dd203296c8d90867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_satoshi, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 09:49:19 compute-0 podman[135816]: 2026-01-26 09:49:19.815431059 +0000 UTC m=+0.019891428 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:49:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-76320c65667728bbea2e62c0500c7c74fc38c2367bfe14a142d17b4369dec083-merged.mount: Deactivated successfully.
Jan 26 09:49:19 compute-0 podman[135816]: 2026-01-26 09:49:19.949418562 +0000 UTC m=+0.153878911 container remove 1487f4d2ee75c34820b3ad07379b63026ad9efe59dff0fd8dd203296c8d90867 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_satoshi, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:49:19 compute-0 systemd[1]: libpod-conmon-1487f4d2ee75c34820b3ad07379b63026ad9efe59dff0fd8dd203296c8d90867.scope: Deactivated successfully.
Jan 26 09:49:20 compute-0 podman[135857]: 2026-01-26 09:49:20.128462754 +0000 UTC m=+0.057433540 container create 7d8c5b7c323158be505637b64c0fbd3274f793d33a96390dbee48af4c7c38717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 26 09:49:20 compute-0 ceph-mon[74456]: pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:49:20 compute-0 systemd[1]: Started libpod-conmon-7d8c5b7c323158be505637b64c0fbd3274f793d33a96390dbee48af4c7c38717.scope.
Jan 26 09:49:20 compute-0 podman[135857]: 2026-01-26 09:49:20.111293332 +0000 UTC m=+0.040264148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:49:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb18890fa28f3af2c90052579e60c0beb64bd7569fa00510f24e2036d5f54d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb18890fa28f3af2c90052579e60c0beb64bd7569fa00510f24e2036d5f54d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb18890fa28f3af2c90052579e60c0beb64bd7569fa00510f24e2036d5f54d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb18890fa28f3af2c90052579e60c0beb64bd7569fa00510f24e2036d5f54d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:20 compute-0 podman[135857]: 2026-01-26 09:49:20.261979055 +0000 UTC m=+0.190949921 container init 7d8c5b7c323158be505637b64c0fbd3274f793d33a96390dbee48af4c7c38717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 09:49:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:20 compute-0 podman[135857]: 2026-01-26 09:49:20.277549552 +0000 UTC m=+0.206520388 container start 7d8c5b7c323158be505637b64c0fbd3274f793d33a96390dbee48af4c7c38717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_montalcini, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 26 09:49:20 compute-0 podman[135857]: 2026-01-26 09:49:20.282968942 +0000 UTC m=+0.211939828 container attach 7d8c5b7c323158be505637b64c0fbd3274f793d33a96390dbee48af4c7c38717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 09:49:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:20 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]: {
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:     "0": [
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:         {
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:             "devices": [
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "/dev/loop3"
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:             ],
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:             "lv_name": "ceph_lv0",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:             "lv_size": "21470642176",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:             "name": "ceph_lv0",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:             "tags": {
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "ceph.cluster_name": "ceph",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "ceph.crush_device_class": "",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "ceph.encrypted": "0",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "ceph.osd_id": "0",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "ceph.type": "block",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "ceph.vdo": "0",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:                 "ceph.with_tpm": "0"
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:             },
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:             "type": "block",
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:             "vg_name": "ceph_vg0"
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:         }
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]:     ]
Jan 26 09:49:20 compute-0 laughing_montalcini[135873]: }
Jan 26 09:49:20 compute-0 sudo[135699]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:20 compute-0 systemd[1]: libpod-7d8c5b7c323158be505637b64c0fbd3274f793d33a96390dbee48af4c7c38717.scope: Deactivated successfully.
Jan 26 09:49:20 compute-0 podman[135857]: 2026-01-26 09:49:20.587921194 +0000 UTC m=+0.516892000 container died 7d8c5b7c323158be505637b64c0fbd3274f793d33a96390dbee48af4c7c38717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 09:49:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bb18890fa28f3af2c90052579e60c0beb64bd7569fa00510f24e2036d5f54d7-merged.mount: Deactivated successfully.
Jan 26 09:49:20 compute-0 podman[135857]: 2026-01-26 09:49:20.640080338 +0000 UTC m=+0.569051134 container remove 7d8c5b7c323158be505637b64c0fbd3274f793d33a96390dbee48af4c7c38717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:49:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:49:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:20.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:49:20 compute-0 systemd[1]: libpod-conmon-7d8c5b7c323158be505637b64c0fbd3274f793d33a96390dbee48af4c7c38717.scope: Deactivated successfully.
Jan 26 09:49:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:49:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:49:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:20.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:49:20 compute-0 sudo[135751]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:20 compute-0 sudo[135932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:49:20 compute-0 sudo[135932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:20 compute-0 sudo[135932]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:20 compute-0 sudo[135990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:49:20 compute-0 sudo[135990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:21 compute-0 podman[136129]: 2026-01-26 09:49:21.292526713 +0000 UTC m=+0.043425835 container create a70803f89d7e61a5e2f30428e9474954826eadaf6278d7e91b74b5e342a9b94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_driscoll, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 26 09:49:21 compute-0 systemd[1]: Started libpod-conmon-a70803f89d7e61a5e2f30428e9474954826eadaf6278d7e91b74b5e342a9b94c.scope.
Jan 26 09:49:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:49:21 compute-0 podman[136129]: 2026-01-26 09:49:21.274094985 +0000 UTC m=+0.024994117 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:49:21 compute-0 podman[136129]: 2026-01-26 09:49:21.378382343 +0000 UTC m=+0.129281485 container init a70803f89d7e61a5e2f30428e9474954826eadaf6278d7e91b74b5e342a9b94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_driscoll, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 09:49:21 compute-0 podman[136129]: 2026-01-26 09:49:21.384688116 +0000 UTC m=+0.135587228 container start a70803f89d7e61a5e2f30428e9474954826eadaf6278d7e91b74b5e342a9b94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:49:21 compute-0 podman[136129]: 2026-01-26 09:49:21.388419428 +0000 UTC m=+0.139318540 container attach a70803f89d7e61a5e2f30428e9474954826eadaf6278d7e91b74b5e342a9b94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_driscoll, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Jan 26 09:49:21 compute-0 nostalgic_driscoll[136151]: 167 167
Jan 26 09:49:21 compute-0 systemd[1]: libpod-a70803f89d7e61a5e2f30428e9474954826eadaf6278d7e91b74b5e342a9b94c.scope: Deactivated successfully.
Jan 26 09:49:21 compute-0 podman[136129]: 2026-01-26 09:49:21.395304908 +0000 UTC m=+0.146204060 container died a70803f89d7e61a5e2f30428e9474954826eadaf6278d7e91b74b5e342a9b94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_driscoll, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 26 09:49:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-91239b2f61f3a2d340ee1b3a86ae35aa6c77e9b0401f24b339d40c55ad9440e7-merged.mount: Deactivated successfully.
Jan 26 09:49:21 compute-0 python3.9[136141]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:49:21 compute-0 podman[136129]: 2026-01-26 09:49:21.449741284 +0000 UTC m=+0.200640396 container remove a70803f89d7e61a5e2f30428e9474954826eadaf6278d7e91b74b5e342a9b94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_driscoll, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:49:21 compute-0 systemd[1]: libpod-conmon-a70803f89d7e61a5e2f30428e9474954826eadaf6278d7e91b74b5e342a9b94c.scope: Deactivated successfully.
Jan 26 09:49:21 compute-0 podman[136175]: 2026-01-26 09:49:21.641458054 +0000 UTC m=+0.053082340 container create cc19d8313a68dc982f4931bfce573fa5672fb20407b2614dea1426834fa482ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:49:21 compute-0 systemd[1]: Started libpod-conmon-cc19d8313a68dc982f4931bfce573fa5672fb20407b2614dea1426834fa482ce.scope.
Jan 26 09:49:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:49:21 compute-0 podman[136175]: 2026-01-26 09:49:21.616800936 +0000 UTC m=+0.028425302 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35bfed7a34c0ee87c60a2dd30f200130bcada33209672d5a562e77ec065ec70d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35bfed7a34c0ee87c60a2dd30f200130bcada33209672d5a562e77ec065ec70d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35bfed7a34c0ee87c60a2dd30f200130bcada33209672d5a562e77ec065ec70d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35bfed7a34c0ee87c60a2dd30f200130bcada33209672d5a562e77ec065ec70d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:21 compute-0 podman[136175]: 2026-01-26 09:49:21.731626153 +0000 UTC m=+0.143250469 container init cc19d8313a68dc982f4931bfce573fa5672fb20407b2614dea1426834fa482ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_varahamihira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:49:21 compute-0 podman[136175]: 2026-01-26 09:49:21.737543885 +0000 UTC m=+0.149168171 container start cc19d8313a68dc982f4931bfce573fa5672fb20407b2614dea1426834fa482ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_varahamihira, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:49:21 compute-0 podman[136175]: 2026-01-26 09:49:21.742185073 +0000 UTC m=+0.153809359 container attach cc19d8313a68dc982f4931bfce573fa5672fb20407b2614dea1426834fa482ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_varahamihira, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 09:49:22 compute-0 ceph-mon[74456]: pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:49:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:22 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:22 compute-0 lvm[136292]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:49:22 compute-0 lvm[136292]: VG ceph_vg0 finished
Jan 26 09:49:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:22 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:22 compute-0 epic_varahamihira[136192]: {}
Jan 26 09:49:22 compute-0 systemd[1]: libpod-cc19d8313a68dc982f4931bfce573fa5672fb20407b2614dea1426834fa482ce.scope: Deactivated successfully.
Jan 26 09:49:22 compute-0 podman[136175]: 2026-01-26 09:49:22.490499813 +0000 UTC m=+0.902124099 container died cc19d8313a68dc982f4931bfce573fa5672fb20407b2614dea1426834fa482ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_varahamihira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 09:49:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:22 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:22 compute-0 systemd[1]: libpod-cc19d8313a68dc982f4931bfce573fa5672fb20407b2614dea1426834fa482ce.scope: Consumed 1.239s CPU time.
Jan 26 09:49:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:22.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-35bfed7a34c0ee87c60a2dd30f200130bcada33209672d5a562e77ec065ec70d-merged.mount: Deactivated successfully.
Jan 26 09:49:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:49:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:22.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:49:22 compute-0 podman[136175]: 2026-01-26 09:49:22.696436754 +0000 UTC m=+1.108061040 container remove cc19d8313a68dc982f4931bfce573fa5672fb20407b2614dea1426834fa482ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 26 09:49:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:49:22 compute-0 systemd[1]: libpod-conmon-cc19d8313a68dc982f4931bfce573fa5672fb20407b2614dea1426834fa482ce.scope: Deactivated successfully.
Jan 26 09:49:22 compute-0 sudo[135990]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:49:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:49:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:49:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:49:22 compute-0 sudo[136381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:49:22 compute-0 sudo[136381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:22 compute-0 sudo[136381]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:23 compute-0 python3.9[136456]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 09:49:23 compute-0 ceph-mon[74456]: pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:23 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:49:23 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:49:24 compute-0 python3.9[136606]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:49:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:24 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:24 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:24 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:49:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:24.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:49:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:49:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:24.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:49:24 compute-0 python3.9[136758]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:49:25 compute-0 sshd-session[135005]: Connection closed by 192.168.122.30 port 34958
Jan 26 09:49:25 compute-0 sshd-session[135002]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:49:25 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Jan 26 09:49:25 compute-0 systemd[1]: session-47.scope: Consumed 6.042s CPU time.
Jan 26 09:49:25 compute-0 systemd-logind[787]: Session 47 logged out. Waiting for processes to exit.
Jan 26 09:49:25 compute-0 systemd-logind[787]: Removed session 47.
Jan 26 09:49:25 compute-0 ceph-mon[74456]: pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:26 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:26 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:26 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:49:26] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Jan 26 09:49:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:49:26] "GET /metrics HTTP/1.1" 200 48254 "" "Prometheus/2.51.0"
Jan 26 09:49:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:26.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:26.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:49:26.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:49:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:49:28 compute-0 ceph-mon[74456]: pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:28 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:28 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0520002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:28 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:49:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:28.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:49:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:28.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:29 compute-0 ceph-mon[74456]: pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:30 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:30 compute-0 sshd-session[136787]: Accepted publickey for zuul from 192.168.122.30 port 58202 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:49:30 compute-0 systemd-logind[787]: New session 48 of user zuul.
Jan 26 09:49:30 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 26 09:49:30 compute-0 sshd-session[136787]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:49:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:30 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:30 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05200044c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:30.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:49:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:30.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:31 compute-0 python3.9[136942]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:49:31 compute-0 ceph-mon[74456]: pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:49:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:32 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:32 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:32 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0510003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:32.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 09:49:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:32.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 09:49:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:49:33 compute-0 sudo[137098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjcdgrlaaebdcnhnycncptgjttzsgdqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420972.5995564-105-65325775852043/AnsiballZ_file.py'
Jan 26 09:49:33 compute-0 sudo[137098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:33 compute-0 python3.9[137100]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:49:33 compute-0 sudo[137098]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:49:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:49:33 compute-0 sudo[137250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdeuixyrgtzoyvdterpomnquxqsqaqya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420973.4611897-105-271810074584456/AnsiballZ_file.py'
Jan 26 09:49:33 compute-0 sudo[137250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:33 compute-0 ceph-mon[74456]: pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:49:33 compute-0 python3.9[137252]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:49:33 compute-0 sudo[137250]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:34 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05200044c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:34 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f053c004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:34 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300043d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:49:34 compute-0 sudo[137404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccrejtcnouthewasirlkcwbwtqdzaizf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420974.169401-154-244914557424650/AnsiballZ_stat.py'
Jan 26 09:49:34 compute-0 sudo[137404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:34.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:34.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:34 compute-0 python3.9[137406]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:34 compute-0 sudo[137404]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:35 compute-0 sudo[137527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihafkoyaerzilzuoenpzdhbqmorcuwxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420974.169401-154-244914557424650/AnsiballZ_copy.py'
Jan 26 09:49:35 compute-0 sudo[137527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:35 compute-0 python3.9[137529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420974.169401-154-244914557424650/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=3d044d4b225209c9970df4d2de90a71d39cb35e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:35 compute-0 sudo[137527]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:35 compute-0 ceph-mon[74456]: pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:35 compute-0 sudo[137679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmvvjduowralugepxcugzobiefkdcfog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420975.54812-154-185303010070301/AnsiballZ_stat.py'
Jan 26 09:49:35 compute-0 sudo[137679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:35 compute-0 sudo[137682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:49:35 compute-0 sudo[137682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:35 compute-0 sudo[137682]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:36 compute-0 python3.9[137681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:36 compute-0 sudo[137679]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:36 compute-0 kernel: ganesha.nfsd[126572]: segfault at 50 ip 00007f05c716c32e sp 00007f054d7f9210 error 4 in libntirpc.so.5.8[7f05c7151000+2c000] likely on CPU 7 (core 0, socket 7)
Jan 26 09:49:36 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 09:49:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[121953]: 26/01/2026 09:49:36 : epoch 69773828 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f05300043d0 fd 39 proxy ignored for local
Jan 26 09:49:36 compute-0 systemd[1]: Started Process Core Dump (PID 137786/UID 0).
Jan 26 09:49:36 compute-0 sudo[137831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqikstyvazrpsgdhntpfeezcrfxosjxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420975.54812-154-185303010070301/AnsiballZ_copy.py'
Jan 26 09:49:36 compute-0 sudo[137831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:36 compute-0 python3.9[137833]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420975.54812-154-185303010070301/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=679f96efe917c3889f556f17807e671104af52ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:36 compute-0 sudo[137831]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:49:36] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 26 09:49:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:36.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:49:36] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 26 09:49:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:49:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:36.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:49:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:49:36.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:49:37 compute-0 sudo[137983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njkatvyxiicrfjtssrethyvtfzbjxekx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420976.7601082-154-176036592225546/AnsiballZ_stat.py'
Jan 26 09:49:37 compute-0 sudo[137983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:37 compute-0 python3.9[137985]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:37 compute-0 sudo[137983]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:37 compute-0 systemd-coredump[137804]: Process 121957 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 56:
                                                    #0  0x00007f05c716c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 09:49:37 compute-0 systemd[1]: systemd-coredump@2-137786-0.service: Deactivated successfully.
Jan 26 09:49:37 compute-0 systemd[1]: systemd-coredump@2-137786-0.service: Consumed 1.164s CPU time.
Jan 26 09:49:37 compute-0 podman[138083]: 2026-01-26 09:49:37.603063247 +0000 UTC m=+0.029688116 container died deee9e05455ee19a4632830b7e1d3965523669bd607fcf6c6d188864c81f8076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:49:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6af6df3f472b597cbeda041d6e30699aca4734b039036c74a0c51adb9b83a7ff-merged.mount: Deactivated successfully.
Jan 26 09:49:37 compute-0 podman[138083]: 2026-01-26 09:49:37.642265352 +0000 UTC m=+0.068890191 container remove deee9e05455ee19a4632830b7e1d3965523669bd607fcf6c6d188864c81f8076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:49:37 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 09:49:37 compute-0 sudo[138124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyqxcbggipxcazolocxdovmlgclzvfhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420976.7601082-154-176036592225546/AnsiballZ_copy.py'
Jan 26 09:49:37 compute-0 sudo[138124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:49:37 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 09:49:37 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.744s CPU time.
Jan 26 09:49:37 compute-0 python3.9[138130]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420976.7601082-154-176036592225546/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0aea469d8f590298437b996ede2ec564a1b3d712 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:37 compute-0 sudo[138124]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:37 compute-0 ceph-mon[74456]: pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:38 compute-0 sudo[138308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkfstedxoidsdqqcqwlszuiroasjicdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420978.1077943-281-74736575579457/AnsiballZ_file.py'
Jan 26 09:49:38 compute-0 sudo[138308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:38 compute-0 python3.9[138310]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:49:38 compute-0 sudo[138308]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:38.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:38.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:39 compute-0 sudo[138460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jevogqoixfxwbmtznspgpityitbpslgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420978.7803633-281-30171719691368/AnsiballZ_file.py'
Jan 26 09:49:39 compute-0 sudo[138460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:39 compute-0 python3.9[138462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:49:39 compute-0 sudo[138460]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:39 compute-0 sudo[138612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coinymdfztpzuukhezoflphuupmbdroo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420979.4885287-329-888621168851/AnsiballZ_stat.py'
Jan 26 09:49:39 compute-0 sudo[138612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094939 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:49:39 compute-0 ceph-mon[74456]: pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:49:40 compute-0 python3.9[138614]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:40 compute-0 sudo[138612]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:40 compute-0 sudo[138737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lphwovcwnvpjeaijzoyiwtmgfmkqrtyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420979.4885287-329-888621168851/AnsiballZ_copy.py'
Jan 26 09:49:40 compute-0 sudo[138737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:40 compute-0 python3.9[138739]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420979.4885287-329-888621168851/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=19eb1b89635022cb395a28d94f9c129c30adceb9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:40 compute-0 sudo[138737]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:49:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:40.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:40.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:41 compute-0 sudo[138889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtrqqvpgnqvljllepkwccdertuwyjmgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420980.79201-329-222665506600437/AnsiballZ_stat.py'
Jan 26 09:49:41 compute-0 sudo[138889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:41 compute-0 python3.9[138891]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:41 compute-0 sudo[138889]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:41 compute-0 sudo[139012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubnfojokbmtpiuysucegwugnojzftuze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420980.79201-329-222665506600437/AnsiballZ_copy.py'
Jan 26 09:49:41 compute-0 sudo[139012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:41 compute-0 python3.9[139014]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420980.79201-329-222665506600437/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3190a221de07992f337d3e4a96f47a3d3dd4b35b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:41 compute-0 sudo[139012]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:42 compute-0 ceph-mon[74456]: pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:49:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/094942 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:49:42 compute-0 sudo[139166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooyijadxnfrnylptziwmvopixlsgtiam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420981.9985971-329-160751723493384/AnsiballZ_stat.py'
Jan 26 09:49:42 compute-0 sudo[139166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:42 compute-0 python3.9[139168]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:42 compute-0 sudo[139166]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:49:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 09:49:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:42.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 09:49:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:49:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:42.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:49:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:49:42 compute-0 sudo[139289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bujmurkpudkmasfsotpmjtmvhhfcopzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420981.9985971-329-160751723493384/AnsiballZ_copy.py'
Jan 26 09:49:42 compute-0 sudo[139289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:43 compute-0 python3.9[139291]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420981.9985971-329-160751723493384/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=3937f4557af37315031f0e6d0180c47f6affc1a2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:43 compute-0 sudo[139289]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:43 compute-0 sudo[139441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diqhrsydcjkaatgbmgwfewkmmnaoclnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420983.376587-456-11548645787566/AnsiballZ_file.py'
Jan 26 09:49:43 compute-0 sudo[139441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:43 compute-0 python3.9[139443]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:49:43 compute-0 ceph-mon[74456]: pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:49:43 compute-0 sudo[139441]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:44 compute-0 sudo[139595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnwnpzwopdvmyzmzctetypxcovsjkclk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420984.1188738-456-74031782476322/AnsiballZ_file.py'
Jan 26 09:49:44 compute-0 sudo[139595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:49:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:44.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:44 compute-0 python3.9[139597]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:49:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:44.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:44 compute-0 sudo[139595]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:45 compute-0 sudo[139747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spybspbqrqypxrpmhfznwmmbigijbckx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420984.8885727-498-179426952574062/AnsiballZ_stat.py'
Jan 26 09:49:45 compute-0 sudo[139747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:45 compute-0 python3.9[139749]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:45 compute-0 sudo[139747]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:45 compute-0 sudo[139870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpvhgbgplkofvgblovpodrwrtqdlknly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420984.8885727-498-179426952574062/AnsiballZ_copy.py'
Jan 26 09:49:45 compute-0 sudo[139870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:45 compute-0 python3.9[139872]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420984.8885727-498-179426952574062/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8d1116df1a5565217a6b2353ae10050c730f6f70 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:45 compute-0 sudo[139870]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:46 compute-0 ceph-mon[74456]: pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:49:46 compute-0 sudo[140024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bskcaiublkrlzptzntaeqwpnfuimnwkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420985.9904938-498-164076306276571/AnsiballZ_stat.py'
Jan 26 09:49:46 compute-0 sudo[140024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:46 compute-0 python3.9[140026]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:46 compute-0 sudo[140024]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:49:46] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 26 09:49:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:49:46] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 26 09:49:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:49:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:46.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:46.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:46 compute-0 sudo[140147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxprfdttbjtfiqmdvnyehjhtdnxpxqbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420985.9904938-498-164076306276571/AnsiballZ_copy.py'
Jan 26 09:49:46 compute-0 sudo[140147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:49:46.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:49:47 compute-0 python3.9[140149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420985.9904938-498-164076306276571/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3190a221de07992f337d3e4a96f47a3d3dd4b35b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:47 compute-0 sudo[140147]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:47 compute-0 sudo[140299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfcxojudfnttdndabuxgfmdolicdnvtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420987.2655847-498-271356960192772/AnsiballZ_stat.py'
Jan 26 09:49:47 compute-0 sudo[140299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:49:47 compute-0 python3.9[140301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:47 compute-0 sudo[140299]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:48 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 3.
Jan 26 09:49:48 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:49:48 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.744s CPU time.
Jan 26 09:49:48 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:49:48 compute-0 sudo[140434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzkghwtzpunokkvrfrzbtwsjcfvcbols ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420987.2655847-498-271356960192772/AnsiballZ_copy.py'
Jan 26 09:49:48 compute-0 sudo[140434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:48 compute-0 ceph-mon[74456]: pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:49:48 compute-0 python3.9[140438]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420987.2655847-498-271356960192772/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=cdbdc526e17500d43bc37a4bb747e1a4e6893176 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:48 compute-0 sudo[140434]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:48 compute-0 podman[140468]: 2026-01-26 09:49:48.243454055 +0000 UTC m=+0.021794047 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:49:48 compute-0 podman[140468]: 2026-01-26 09:49:48.359077089 +0000 UTC m=+0.137417061 container create 65f50e5443fc0a0f613b45e2608e94e4ee7e25dc951bd6d6085af5b9894254a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 09:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f3c61f8987b0f8da97f1719b6515d20f62debfec016d46543e7fe8089e2a854/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f3c61f8987b0f8da97f1719b6515d20f62debfec016d46543e7fe8089e2a854/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f3c61f8987b0f8da97f1719b6515d20f62debfec016d46543e7fe8089e2a854/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f3c61f8987b0f8da97f1719b6515d20f62debfec016d46543e7fe8089e2a854/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:49:48 compute-0 podman[140468]: 2026-01-26 09:49:48.41684646 +0000 UTC m=+0.195186442 container init 65f50e5443fc0a0f613b45e2608e94e4ee7e25dc951bd6d6085af5b9894254a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 09:49:48 compute-0 podman[140468]: 2026-01-26 09:49:48.424276716 +0000 UTC m=+0.202616688 container start 65f50e5443fc0a0f613b45e2608e94e4ee7e25dc951bd6d6085af5b9894254a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:49:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:49:48 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 09:49:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:49:48 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 09:49:48 compute-0 bash[140468]: 65f50e5443fc0a0f613b45e2608e94e4ee7e25dc951bd6d6085af5b9894254a8
Jan 26 09:49:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:49:48 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 09:49:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:49:48 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 09:49:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:49:48 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 09:49:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:49:48 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 09:49:48 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:49:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:49:48 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 09:49:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:49:48 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:49:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:49:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 09:49:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:48.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 09:49:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:49:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:49:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:49:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:48.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:49:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:49:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:49:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:49:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:49:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:49:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:49:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:49:49 compute-0 sudo[140677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aacrzvxbxzlvmxjmgmkseqkfpjhemhkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420989.0834095-667-192761316980586/AnsiballZ_file.py'
Jan 26 09:49:49 compute-0 sudo[140677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:49 compute-0 python3.9[140679]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:49:49 compute-0 sudo[140677]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:50 compute-0 sudo[140829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwckozohirfqdrscdeeqotkkdopigwhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420989.8713994-706-6665579459853/AnsiballZ_stat.py'
Jan 26 09:49:50 compute-0 sudo[140829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:50 compute-0 ceph-mon[74456]: pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:49:50 compute-0 python3.9[140831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:50 compute-0 sudo[140829]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Jan 26 09:49:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:50.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:49:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:50.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:49:50 compute-0 sshd-session[140841]: Invalid user test from 157.245.76.178 port 36898
Jan 26 09:49:50 compute-0 sudo[140956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-attdkzgesxsnqmsbkgnhpkpewbhvkmoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420989.8713994-706-6665579459853/AnsiballZ_copy.py'
Jan 26 09:49:50 compute-0 sudo[140956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:50 compute-0 sshd-session[140841]: Connection closed by invalid user test 157.245.76.178 port 36898 [preauth]
Jan 26 09:49:50 compute-0 python3.9[140958]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420989.8713994-706-6665579459853/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a4f71bf0609e75a0e091c9100076ae4c4a7bed4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:50 compute-0 sudo[140956]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:51 compute-0 sudo[141108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oujmxowjvmftyriwluibetcbcsyjuwfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420991.1485493-753-238603015113291/AnsiballZ_file.py'
Jan 26 09:49:51 compute-0 sudo[141108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:51 compute-0 ceph-mon[74456]: pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Jan 26 09:49:51 compute-0 python3.9[141110]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:49:51 compute-0 sudo[141108]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:52 compute-0 sudo[141260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcgmvzsomakguovzxokvovzuejujprae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420991.758955-772-277740969635628/AnsiballZ_stat.py'
Jan 26 09:49:52 compute-0 sudo[141260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:52 compute-0 python3.9[141262]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:52 compute-0 sudo[141260]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:52 compute-0 sudo[141385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuqqzcvvkshduqxpacwyqebahxqtsmdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420991.758955-772-277740969635628/AnsiballZ_copy.py'
Jan 26 09:49:52 compute-0 sudo[141385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Jan 26 09:49:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:52.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:49:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:52.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:52 compute-0 python3.9[141387]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420991.758955-772-277740969635628/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a4f71bf0609e75a0e091c9100076ae4c4a7bed4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:52 compute-0 sudo[141385]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:53 compute-0 sudo[141537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgvhwqptytrmecujzfylewwuqtpapavp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420993.0080016-817-92365807875245/AnsiballZ_file.py'
Jan 26 09:49:53 compute-0 sudo[141537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:53 compute-0 python3.9[141539]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:49:53 compute-0 sudo[141537]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:54 compute-0 sudo[141689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsyifuapzmcciotyanadekcypsvebqyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420993.78295-842-71035180487718/AnsiballZ_stat.py'
Jan 26 09:49:54 compute-0 sudo[141689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:54 compute-0 ceph-mon[74456]: pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Jan 26 09:49:54 compute-0 python3.9[141691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:54 compute-0 sudo[141689]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:49:54 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:49:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:49:54 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:49:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:49:54 compute-0 sudo[141814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twuqqaijxzvjykugqpmvqhiuothztrix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420993.78295-842-71035180487718/AnsiballZ_copy.py'
Jan 26 09:49:54 compute-0 sudo[141814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:54.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 09:49:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:54.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 09:49:54 compute-0 python3.9[141816]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420993.78295-842-71035180487718/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a4f71bf0609e75a0e091c9100076ae4c4a7bed4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:54 compute-0 sudo[141814]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:55 compute-0 sudo[141966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pctdqqpkajewdyqzmwnquexpvhsjjces ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420995.0636895-888-201659135828391/AnsiballZ_file.py'
Jan 26 09:49:55 compute-0 sudo[141966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:55 compute-0 python3.9[141968]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:49:55 compute-0 sudo[141966]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:55 compute-0 sudo[142118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llkncnlnbpvtbdrkinrvollrarhhljhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420995.6713262-909-140282485840075/AnsiballZ_stat.py'
Jan 26 09:49:55 compute-0 sudo[142118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:56 compute-0 sudo[142121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:49:56 compute-0 sudo[142121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:49:56 compute-0 sudo[142121]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:56 compute-0 ceph-mon[74456]: pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:49:56 compute-0 python3.9[142120]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:56 compute-0 sudo[142118]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:49:56] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:49:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:49:56] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:49:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:49:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:49:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:56.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:49:56 compute-0 sudo[142268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdngilhbanjkcdqpocbhglxgmcsrrenn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420995.6713262-909-140282485840075/AnsiballZ_copy.py'
Jan 26 09:49:56 compute-0 sudo[142268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:56.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:56 compute-0 python3.9[142270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420995.6713262-909-140282485840075/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a4f71bf0609e75a0e091c9100076ae4c4a7bed4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:56 compute-0 sudo[142268]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:49:56.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:49:57 compute-0 sudo[142420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeqtjhwltecgnpfsifcgrhmnxugovxdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420997.138976-957-129149027609293/AnsiballZ_file.py'
Jan 26 09:49:57 compute-0 sudo[142420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:49:57 compute-0 python3.9[142422]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:49:57 compute-0 sudo[142420]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:58 compute-0 ceph-mon[74456]: pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:49:58 compute-0 sudo[142572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isqpoxpyuxknnhabfjxczjrwavckaiht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420997.9899633-984-98732240496769/AnsiballZ_stat.py'
Jan 26 09:49:58 compute-0 sudo[142572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:58 compute-0 python3.9[142576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:49:58 compute-0 sudo[142572]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:49:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:49:58.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:49:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:49:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:49:58.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:49:58 compute-0 sudo[142697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zggytjqudbqsldicltsozehunclgolwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420997.9899633-984-98732240496769/AnsiballZ_copy.py'
Jan 26 09:49:58 compute-0 sudo[142697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:59 compute-0 python3.9[142699]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769420997.9899633-984-98732240496769/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a4f71bf0609e75a0e091c9100076ae4c4a7bed4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:49:59 compute-0 sudo[142697]: pam_unix(sudo:session): session closed for user root
Jan 26 09:49:59 compute-0 sudo[142849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-simahvgdemqynqedxdklugaskiokeutt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769420999.3198164-1033-50473246320529/AnsiballZ_file.py'
Jan 26 09:49:59 compute-0 sudo[142849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:49:59 compute-0 python3.9[142851]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:49:59 compute-0 sudo[142849]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 26 09:50:00 compute-0 ceph-mon[74456]: pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:50:00 compute-0 ceph-mon[74456]: overall HEALTH_OK
Jan 26 09:50:00 compute-0 sudo[143003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geyloufxzudklgrjuouizzktfutmshez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421000.1073308-1060-111124217951984/AnsiballZ_stat.py'
Jan 26 09:50:00 compute-0 sudo[143003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 09:50:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
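The ganesha.nfsd startup sequence above repeats a handful of WARN/CRIT entries (missing DBus socket, unusable keytab, no export entries) before declaring the server initialized. A minimal sketch that collapses those repeats into counts, assuming the journal has been saved to a file (journal.log is a placeholder name); the field layout is taken from the ganesha lines themselves:

import re
from collections import Counter

# "... ganesha.nfsd-2[<thread>] <func> :<SUBSYS> :<LEVEL> :<message>"
GANESHA = re.compile(
    r'ganesha\.nfsd-\d+\[(?P<thread>\w+)\] (?P<func>\S+) '
    r':(?P<subsys>[A-Z0-9 ]+?) :(?P<level>\w+) :(?P<msg>.*)$'
)

def summarize(path):
    counts = Counter()
    with open(path, encoding='utf-8', errors='replace') as f:
        for line in f:
            m = GANESHA.search(line)
            if m and m.group('level') in ('WARN', 'CRIT'):
                # Key on level, subsystem and message prefix so repeats collapse.
                counts[(m.group('level'), m.group('subsys'), m.group('msg')[:40])] += 1
    for (level, subsys, msg), n in counts.most_common():
        print(f'{n:5d}  {level:5s} {subsys:12s} {msg}')

if __name__ == '__main__':
    summarize('journal.log')  # placeholder path, not from the log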
Jan 26 09:50:00 compute-0 python3.9[143005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:50:00 compute-0 sudo[143003]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 26 09:50:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:50:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:00.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:50:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 09:50:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:00.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
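The radosgw "beast" access lines above recur every two seconds from 192.168.122.100 and 192.168.122.102 (anonymous HEAD / probes, consistent with load-balancer health checks). A small parsing sketch for that line format, using one of the entries above as sample input:

import re

# Field layout taken from the radosgw "beast" access lines above.
BEAST = re.compile(
    r'beast: 0x[0-9a-f]+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<lat>[\d.]+)s'
)

sample = (
    'beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous '
    '[26/Jan/2026:09:50:00.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
    'latency=0.001000025s'
)
m = BEAST.search(sample)
print(m.group('ip'), m.group('req'), m.group('status'), float(m.group('lat')))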
Jan 26 09:50:01 compute-0 sudo[143138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oupzadksandfbyzxfbcckrfdhgneyndj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421000.1073308-1060-111124217951984/AnsiballZ_copy.py'
Jan 26 09:50:01 compute-0 sudo[143138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:01 compute-0 python3.9[143140]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421000.1073308-1060-111124217951984/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a4f71bf0609e75a0e091c9100076ae4c4a7bed4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:01 compute-0 sudo[143138]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:01 compute-0 ceph-mon[74456]: pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 26 09:50:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095001 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:50:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:02 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:02 compute-0 sshd-session[136792]: Connection closed by 192.168.122.30 port 58202
Jan 26 09:50:02 compute-0 sshd-session[136787]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:50:02 compute-0 systemd-logind[787]: Session 48 logged out. Waiting for processes to exit.
Jan 26 09:50:02 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 26 09:50:02 compute-0 systemd[1]: session-48.scope: Consumed 23.928s CPU time.
Jan 26 09:50:02 compute-0 systemd-logind[787]: Removed session 48.
Jan 26 09:50:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:02 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:02 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 09:50:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:50:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:02.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:50:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:50:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:02.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:50:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:50:03 compute-0 ceph-mon[74456]: pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 09:50:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:50:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095004 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:50:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:04 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec4001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:04 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:04 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 09:50:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:04.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:50:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:04.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:50:05 compute-0 ceph-mon[74456]: pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 09:50:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:06 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:06 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec4001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:06 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec8001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:50:06] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 26 09:50:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:50:06] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 26 09:50:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:50:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:06.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:06.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:50:06.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
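The alertmanager dispatcher above gives up after retried POSTs to the two ceph-dashboard webhook receivers time out. A quick reachability sketch for the same endpoints, host names and port copied from the error text; this is a plain TCP connect test only, with no TLS or HTTP layer:

import socket

# Endpoints as they appear in the alertmanager error above.
TARGETS = [('compute-1.ctlplane.example.com', 8443),
           ('compute-2.ctlplane.example.com', 8443)]

for host, port in TARGETS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f'{host}:{port} reachable')
    except OSError as exc:
        print(f'{host}:{port} FAILED: {exc}')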
Jan 26 09:50:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:50:07 compute-0 sshd-session[143175]: Accepted publickey for zuul from 192.168.122.30 port 36816 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:50:07 compute-0 systemd-logind[787]: New session 49 of user zuul.
Jan 26 09:50:07 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 26 09:50:07 compute-0 sshd-session[143175]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:50:08 compute-0 ceph-mon[74456]: pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:50:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:08 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:08 compute-0 sudo[143330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwyqinhyscvlvieadrkrtasdkkpbimbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421007.9356613-21-246980765273171/AnsiballZ_file.py'
Jan 26 09:50:08 compute-0 sudo[143330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:08 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:08 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec4001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:08 compute-0 python3.9[143332]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:08 compute-0 sudo[143330]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:50:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:08.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 09:50:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:08.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 09:50:09 compute-0 sudo[143482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjcjiaenjrarvwnuijphtbtmmbanwdby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421008.8594668-57-96883465458921/AnsiballZ_stat.py'
Jan 26 09:50:09 compute-0 sudo[143482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:09 compute-0 python3.9[143484]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:50:09 compute-0 sudo[143482]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:10 compute-0 ceph-mon[74456]: pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:50:10 compute-0 sudo[143605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmtnijrljpychfclpnviiajrpkbejpyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421008.8594668-57-96883465458921/AnsiballZ_copy.py'
Jan 26 09:50:10 compute-0 sudo[143605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:10 compute-0 python3.9[143607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421008.8594668-57-96883465458921/.source.conf _original_basename=ceph.conf follow=False checksum=d9847d470420fd34212d6cc1f2ab891aeddd27f2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:10 compute-0 sudo[143605]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:10 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec80089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:10 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:10 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:50:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:10.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:10.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:10 compute-0 sudo[143759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxocdfpgpqngawinjxyssyzmhgwkphug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421010.4039128-57-218656307916573/AnsiballZ_stat.py'
Jan 26 09:50:10 compute-0 sudo[143759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:11 compute-0 python3.9[143761]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:50:11 compute-0 sudo[143759]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:11 compute-0 sudo[143882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seyyiehgqhlcylwruinfigdvrqjmhswq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421010.4039128-57-218656307916573/AnsiballZ_copy.py'
Jan 26 09:50:11 compute-0 sudo[143882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:11 compute-0 python3.9[143884]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421010.4039128-57-218656307916573/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=e8137016e459ec15b04fac1b40fd6c611375a3cb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:11 compute-0 sudo[143882]: pam_unix(sudo:session): session closed for user root
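Both ansible.legacy.copy tasks above log the SHA-1 digest of the file they deployed. A sketch that re-hashes the installed files and compares against those logged checksums (paths and digests copied verbatim from the two invocations):

import hashlib

def sha1_of(path, bufsize=1 << 16):
    h = hashlib.sha1()
    with open(path, 'rb') as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

# checksum= values as logged by the two ansible.legacy.copy tasks above
expected = {
    '/var/lib/openstack/config/ceph/ceph.conf':
        'd9847d470420fd34212d6cc1f2ab891aeddd27f2',
    '/var/lib/openstack/config/ceph/ceph.client.openstack.keyring':
        'e8137016e459ec15b04fac1b40fd6c611375a3cb',
}
for path, want in expected.items():
    got = sha1_of(path)
    print(f'{path}: {"OK" if got == want else "MISMATCH " + got}')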
Jan 26 09:50:12 compute-0 ceph-mon[74456]: pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:50:12 compute-0 sshd-session[143178]: Connection closed by 192.168.122.30 port 36816
Jan 26 09:50:12 compute-0 sshd-session[143175]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:50:12 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 26 09:50:12 compute-0 systemd[1]: session-49.scope: Consumed 2.689s CPU time.
Jan 26 09:50:12 compute-0 systemd-logind[787]: Session 49 logged out. Waiting for processes to exit.
Jan 26 09:50:12 compute-0 systemd-logind[787]: Removed session 49.
Jan 26 09:50:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:12 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec4001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:12 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec80089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:12 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:50:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:50:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:12.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:12.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:14 compute-0 ceph-mon[74456]: pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:50:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:14 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:14 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec40031e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:14 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec80096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:50:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:14.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:14.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:16 compute-0 ceph-mon[74456]: pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:50:16 compute-0 sudo[143913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:50:16 compute-0 sudo[143913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:50:16 compute-0 sudo[143913]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:16 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:16 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:16 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:50:16] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 26 09:50:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:50:16] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 26 09:50:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:50:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 09:50:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:16.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 09:50:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 09:50:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:16.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 09:50:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:50:16.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:50:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:50:16.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:50:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:50:17 compute-0 sshd-session[143940]: Accepted publickey for zuul from 192.168.122.30 port 42048 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:50:17 compute-0 systemd-logind[787]: New session 50 of user zuul.
Jan 26 09:50:17 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 26 09:50:17 compute-0 sshd-session[143940]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:50:18 compute-0 ceph-mon[74456]: pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:50:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:18 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec80096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:18 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:18 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec40031e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:50:18
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.nfs', '.rgw.root', 'images', 'default.rgw.meta', 'volumes', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr']
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:50:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:50:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:50:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:18.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
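The pg_autoscaler targets above are reproducible from the logged inputs: each pool's "pg target" equals capacity ratio × bias × 300, where 300 plausibly corresponds to the default mon_target_pg_per_osd of 100 times this cluster's 3 OSDs (60 GiB total). The "quantized to" figures additionally reflect power-of-two rounding and per-pool minimums, which this sketch does not model:

# Reproduces the pg_autoscaler "pg target" numbers logged above.
# Assumption: the multiplier 300 = mon_target_pg_per_osd (100) * 3 OSDs.
TARGET_PGS = 100 * 3

pools = {  # (capacity_ratio, bias) copied from the log lines above
    '.mgr':               (7.185749983720779e-06, 1.0),
    'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
    '.rgw.root':          (3.8154424692322717e-07, 1.0),
    'default.rgw.log':    (2.1620840658982875e-06, 1.0),
}
for name, (ratio, bias) in pools.items():
    print(f'{name}: pg target {ratio * bias * TARGET_PGS}')

Running this yields 0.0021557249951162337 for '.mgr' and 0.0006104707950771635 for 'cephfs.cephfs.meta', matching the logged values exactly.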
Jan 26 09:50:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:18.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:50:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:50:19 compute-0 python3.9[144095]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:50:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095019 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
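Across this window haproxy reports nfs.cephfs.1 and nfs.cephfs.2 coming UP (Layer4 check passed) and nfs.cephfs.0 going DOWN (connection refused). A tally sketch over such state-change lines; the sample strings are excerpted from the entries above:

import re

# Matches the haproxy server state-change lines above.
STATE = re.compile(r'Server (\S+) is (UP|DOWN), reason: ([^,]+)')

sample = [
    'Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed,',
    'Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed,',
    'Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem,',
]
for line in sample:
    m = STATE.search(line)
    if m:
        server, state, reason = m.groups()
        print(f'{server}: {state} ({reason})')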
Jan 26 09:50:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:50:20 compute-0 ceph-mon[74456]: pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:50:20 compute-0 sudo[144249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmwvutwqlcfeeogvahnfugpxiieguckf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421019.7075946-57-138677398798367/AnsiballZ_file.py'
Jan 26 09:50:20 compute-0 sudo[144249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:20 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:20 compute-0 python3.9[144251]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:50:20 compute-0 sudo[144249]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:20 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec80096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:20 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:50:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 09:50:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:20.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 09:50:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:20.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:20 compute-0 sudo[144403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvgcthqtuqwkwhxplhdnanpxcbmfmyrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421020.5887568-57-24789163365055/AnsiballZ_file.py'
Jan 26 09:50:20 compute-0 sudo[144403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:21 compute-0 python3.9[144405]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:50:21 compute-0 sudo[144403]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:21 compute-0 python3.9[144555]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:50:22 compute-0 ceph-mon[74456]: pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:50:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:22 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 09:50:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8415 writes, 33K keys, 8415 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8415 writes, 1917 syncs, 4.39 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8415 writes, 33K keys, 8415 commit groups, 1.0 writes per commit group, ingest: 21.16 MB, 0.04 MB/s
                                           Interval WAL: 8415 writes, 1917 syncs, 4.39 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 26 09:50:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:22 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:22 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:50:22 compute-0 sudo[144707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awvadfwndlmcdskjisffxhngzxfraftb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421022.187364-126-11740703769470/AnsiballZ_seboolean.py'
Jan 26 09:50:22 compute-0 sudo[144707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:50:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 09:50:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:22.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 09:50:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 09:50:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:22.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 09:50:22 compute-0 python3.9[144709]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 26 09:50:23 compute-0 sudo[144710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:50:23 compute-0 sudo[144710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:50:23 compute-0 sudo[144710]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:23 compute-0 sudo[144735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:50:23 compute-0 sudo[144735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:50:23 compute-0 ceph-mon[74456]: pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:50:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:50:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:50:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:50:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:50:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:23 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:23 compute-0 sudo[144735]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:24 compute-0 sudo[144707]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:24 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:50:24 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:50:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:50:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:50:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:50:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:50:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:24 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:50:24 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:50:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:50:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:50:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:50:24 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:50:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:24 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:24 compute-0 sudo[144869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:50:24 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 26 09:50:24 compute-0 sudo[144869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:50:24 compute-0 sudo[144869]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:24 compute-0 sudo[144923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:50:24 compute-0 sudo[144923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:50:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:50:24 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:24 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:24 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:24 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:24 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:50:24 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:50:24 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:24 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:24 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:50:24 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:50:24 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:50:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d4525c5d0 =====
Jan 26 09:50:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 09:50:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:24.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 09:50:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d4525c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:24 compute-0 radosgw[96326]: beast: 0x7f3d4525c5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:24.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:25 compute-0 sudo[145038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxqvzwzurofnddzauccklzeewnkydzbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421024.4753225-156-266743744508218/AnsiballZ_setup.py'
Jan 26 09:50:25 compute-0 sudo[145038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:25 compute-0 podman[145044]: 2026-01-26 09:50:25.177431432 +0000 UTC m=+0.057061124 container create 33a714c9f20ea1e6fdb2e80104a4538731151f3567552e7342df0c02db405d3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 26 09:50:25 compute-0 systemd[1]: Started libpod-conmon-33a714c9f20ea1e6fdb2e80104a4538731151f3567552e7342df0c02db405d3a.scope.
Jan 26 09:50:25 compute-0 podman[145044]: 2026-01-26 09:50:25.15268179 +0000 UTC m=+0.032311562 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:50:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:50:25 compute-0 podman[145044]: 2026-01-26 09:50:25.275464774 +0000 UTC m=+0.155094546 container init 33a714c9f20ea1e6fdb2e80104a4538731151f3567552e7342df0c02db405d3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:50:25 compute-0 podman[145044]: 2026-01-26 09:50:25.291762543 +0000 UTC m=+0.171392235 container start 33a714c9f20ea1e6fdb2e80104a4538731151f3567552e7342df0c02db405d3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_heyrovsky, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 09:50:25 compute-0 podman[145044]: 2026-01-26 09:50:25.298276106 +0000 UTC m=+0.177905888 container attach 33a714c9f20ea1e6fdb2e80104a4538731151f3567552e7342df0c02db405d3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 26 09:50:25 compute-0 sharp_heyrovsky[145060]: 167 167
Jan 26 09:50:25 compute-0 systemd[1]: libpod-33a714c9f20ea1e6fdb2e80104a4538731151f3567552e7342df0c02db405d3a.scope: Deactivated successfully.
Jan 26 09:50:25 compute-0 conmon[145060]: conmon 33a714c9f20ea1e6fdb2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33a714c9f20ea1e6fdb2e80104a4538731151f3567552e7342df0c02db405d3a.scope/container/memory.events
Jan 26 09:50:25 compute-0 podman[145044]: 2026-01-26 09:50:25.3031839 +0000 UTC m=+0.182813632 container died 33a714c9f20ea1e6fdb2e80104a4538731151f3567552e7342df0c02db405d3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 09:50:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e985d9934a6e0e80ce925b003194af69e3b1d323e86b3ecde134f095bbace056-merged.mount: Deactivated successfully.
Jan 26 09:50:25 compute-0 podman[145044]: 2026-01-26 09:50:25.365611617 +0000 UTC m=+0.245241349 container remove 33a714c9f20ea1e6fdb2e80104a4538731151f3567552e7342df0c02db405d3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 26 09:50:25 compute-0 systemd[1]: libpod-conmon-33a714c9f20ea1e6fdb2e80104a4538731151f3567552e7342df0c02db405d3a.scope: Deactivated successfully.
Jan 26 09:50:25 compute-0 python3.9[145042]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:50:25 compute-0 podman[145091]: 2026-01-26 09:50:25.559979868 +0000 UTC m=+0.044662613 container create d017c45aed911cbb50fd17f5c4917bf348c861b4715401f3812a08d62199ceab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:50:25 compute-0 systemd[1]: Started libpod-conmon-d017c45aed911cbb50fd17f5c4917bf348c861b4715401f3812a08d62199ceab.scope.
Jan 26 09:50:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:50:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1fc55bf8fc19fb9eff2b4c13060e7d2e3487c8c24f499a781fb331e63ce3a2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:50:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1fc55bf8fc19fb9eff2b4c13060e7d2e3487c8c24f499a781fb331e63ce3a2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:50:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1fc55bf8fc19fb9eff2b4c13060e7d2e3487c8c24f499a781fb331e63ce3a2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:50:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1fc55bf8fc19fb9eff2b4c13060e7d2e3487c8c24f499a781fb331e63ce3a2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:50:25 compute-0 podman[145091]: 2026-01-26 09:50:25.540895569 +0000 UTC m=+0.025578334 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:50:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1fc55bf8fc19fb9eff2b4c13060e7d2e3487c8c24f499a781fb331e63ce3a2f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:50:25 compute-0 podman[145091]: 2026-01-26 09:50:25.654710257 +0000 UTC m=+0.139393012 container init d017c45aed911cbb50fd17f5c4917bf348c861b4715401f3812a08d62199ceab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 09:50:25 compute-0 podman[145091]: 2026-01-26 09:50:25.662147454 +0000 UTC m=+0.146830189 container start d017c45aed911cbb50fd17f5c4917bf348c861b4715401f3812a08d62199ceab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:50:25 compute-0 podman[145091]: 2026-01-26 09:50:25.664916143 +0000 UTC m=+0.149598888 container attach d017c45aed911cbb50fd17f5c4917bf348c861b4715401f3812a08d62199ceab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_feistel, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:50:25 compute-0 sudo[145038]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:25 compute-0 ceph-mon[74456]: pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:50:26 compute-0 friendly_feistel[145107]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:50:26 compute-0 friendly_feistel[145107]: --> All data devices are unavailable
Jan 26 09:50:26 compute-0 systemd[1]: libpod-d017c45aed911cbb50fd17f5c4917bf348c861b4715401f3812a08d62199ceab.scope: Deactivated successfully.
Jan 26 09:50:26 compute-0 podman[145091]: 2026-01-26 09:50:26.037469558 +0000 UTC m=+0.522152333 container died d017c45aed911cbb50fd17f5c4917bf348c861b4715401f3812a08d62199ceab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_feistel, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 09:50:26 compute-0 sudo[145197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbvebncirbpdwgptjdjoexffecxsjlaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421024.4753225-156-266743744508218/AnsiballZ_dnf.py'
Jan 26 09:50:26 compute-0 sudo[145197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1fc55bf8fc19fb9eff2b4c13060e7d2e3487c8c24f499a781fb331e63ce3a2f-merged.mount: Deactivated successfully.
Jan 26 09:50:26 compute-0 podman[145091]: 2026-01-26 09:50:26.115494388 +0000 UTC m=+0.600177133 container remove d017c45aed911cbb50fd17f5c4917bf348c861b4715401f3812a08d62199ceab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_feistel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:50:26 compute-0 systemd[1]: libpod-conmon-d017c45aed911cbb50fd17f5c4917bf348c861b4715401f3812a08d62199ceab.scope: Deactivated successfully.
Jan 26 09:50:26 compute-0 sudo[144923]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:26 compute-0 sudo[145210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:50:26 compute-0 sudo[145210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:50:26 compute-0 sudo[145210]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:26 compute-0 sudo[145235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:50:26 compute-0 sudo[145235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:50:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:26 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:26 compute-0 python3.9[145206]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:50:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:26 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:26 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:50:26] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 26 09:50:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:50:26] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 26 09:50:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:50:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:26.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:26.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:26 compute-0 podman[145303]: 2026-01-26 09:50:26.843658843 +0000 UTC m=+0.051596117 container create 81951bc851864fe7a3f25d0af188cea942936489d03fe87acfc7de622796918d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_lewin, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:50:26 compute-0 systemd[1]: Started libpod-conmon-81951bc851864fe7a3f25d0af188cea942936489d03fe87acfc7de622796918d.scope.
Jan 26 09:50:26 compute-0 podman[145303]: 2026-01-26 09:50:26.820857841 +0000 UTC m=+0.028795095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:50:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:50:26 compute-0 podman[145303]: 2026-01-26 09:50:26.929527809 +0000 UTC m=+0.137465073 container init 81951bc851864fe7a3f25d0af188cea942936489d03fe87acfc7de622796918d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 26 09:50:26 compute-0 podman[145303]: 2026-01-26 09:50:26.943609624 +0000 UTC m=+0.151546898 container start 81951bc851864fe7a3f25d0af188cea942936489d03fe87acfc7de622796918d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_lewin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 26 09:50:26 compute-0 practical_lewin[145319]: 167 167
Jan 26 09:50:26 compute-0 systemd[1]: libpod-81951bc851864fe7a3f25d0af188cea942936489d03fe87acfc7de622796918d.scope: Deactivated successfully.
Jan 26 09:50:26 compute-0 podman[145303]: 2026-01-26 09:50:26.951317917 +0000 UTC m=+0.159255171 container attach 81951bc851864fe7a3f25d0af188cea942936489d03fe87acfc7de622796918d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_lewin, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 09:50:26 compute-0 podman[145303]: 2026-01-26 09:50:26.951947842 +0000 UTC m=+0.159885086 container died 81951bc851864fe7a3f25d0af188cea942936489d03fe87acfc7de622796918d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 09:50:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1eeab1c51891518478c33c773dce9ae48d5e41d36e469324c5d0ba85fe93b73d-merged.mount: Deactivated successfully.
Jan 26 09:50:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:50:26.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:50:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:50:26.983Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:50:26 compute-0 podman[145303]: 2026-01-26 09:50:26.988330836 +0000 UTC m=+0.196268070 container remove 81951bc851864fe7a3f25d0af188cea942936489d03fe87acfc7de622796918d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_lewin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 26 09:50:27 compute-0 systemd[1]: libpod-conmon-81951bc851864fe7a3f25d0af188cea942936489d03fe87acfc7de622796918d.scope: Deactivated successfully.
Jan 26 09:50:27 compute-0 podman[145345]: 2026-01-26 09:50:27.16367405 +0000 UTC m=+0.050952011 container create 2f6916d689944155e38cf9148c0b8e37edbeb07a31a29ec4b498255b26aed55c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_black, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:50:27 compute-0 systemd[1]: Started libpod-conmon-2f6916d689944155e38cf9148c0b8e37edbeb07a31a29ec4b498255b26aed55c.scope.
Jan 26 09:50:27 compute-0 podman[145345]: 2026-01-26 09:50:27.139048581 +0000 UTC m=+0.026326522 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:50:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:50:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4022e3486acc127833ecc53ee450e5fdf5873a7ae1c018b45ebd782390148ce4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:50:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4022e3486acc127833ecc53ee450e5fdf5873a7ae1c018b45ebd782390148ce4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:50:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4022e3486acc127833ecc53ee450e5fdf5873a7ae1c018b45ebd782390148ce4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:50:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4022e3486acc127833ecc53ee450e5fdf5873a7ae1c018b45ebd782390148ce4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:50:27 compute-0 podman[145345]: 2026-01-26 09:50:27.283462687 +0000 UTC m=+0.170740628 container init 2f6916d689944155e38cf9148c0b8e37edbeb07a31a29ec4b498255b26aed55c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:50:27 compute-0 podman[145345]: 2026-01-26 09:50:27.299497441 +0000 UTC m=+0.186775362 container start 2f6916d689944155e38cf9148c0b8e37edbeb07a31a29ec4b498255b26aed55c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:50:27 compute-0 podman[145345]: 2026-01-26 09:50:27.303562063 +0000 UTC m=+0.190839984 container attach 2f6916d689944155e38cf9148c0b8e37edbeb07a31a29ec4b498255b26aed55c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 09:50:27 compute-0 strange_black[145362]: {
Jan 26 09:50:27 compute-0 strange_black[145362]:     "0": [
Jan 26 09:50:27 compute-0 strange_black[145362]:         {
Jan 26 09:50:27 compute-0 strange_black[145362]:             "devices": [
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "/dev/loop3"
Jan 26 09:50:27 compute-0 strange_black[145362]:             ],
Jan 26 09:50:27 compute-0 strange_black[145362]:             "lv_name": "ceph_lv0",
Jan 26 09:50:27 compute-0 strange_black[145362]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:50:27 compute-0 strange_black[145362]:             "lv_size": "21470642176",
Jan 26 09:50:27 compute-0 strange_black[145362]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:50:27 compute-0 strange_black[145362]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:50:27 compute-0 strange_black[145362]:             "name": "ceph_lv0",
Jan 26 09:50:27 compute-0 strange_black[145362]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:50:27 compute-0 strange_black[145362]:             "tags": {
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "ceph.cluster_name": "ceph",
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "ceph.crush_device_class": "",
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "ceph.encrypted": "0",
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "ceph.osd_id": "0",
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "ceph.type": "block",
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "ceph.vdo": "0",
Jan 26 09:50:27 compute-0 strange_black[145362]:                 "ceph.with_tpm": "0"
Jan 26 09:50:27 compute-0 strange_black[145362]:             },
Jan 26 09:50:27 compute-0 strange_black[145362]:             "type": "block",
Jan 26 09:50:27 compute-0 strange_black[145362]:             "vg_name": "ceph_vg0"
Jan 26 09:50:27 compute-0 strange_black[145362]:         }
Jan 26 09:50:27 compute-0 strange_black[145362]:     ]
Jan 26 09:50:27 compute-0 strange_black[145362]: }
Jan 26 09:50:27 compute-0 podman[145345]: 2026-01-26 09:50:27.659682346 +0000 UTC m=+0.546960277 container died 2f6916d689944155e38cf9148c0b8e37edbeb07a31a29ec4b498255b26aed55c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_black, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 09:50:27 compute-0 systemd[1]: libpod-2f6916d689944155e38cf9148c0b8e37edbeb07a31a29ec4b498255b26aed55c.scope: Deactivated successfully.
Jan 26 09:50:27 compute-0 sudo[145197]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4022e3486acc127833ecc53ee450e5fdf5873a7ae1c018b45ebd782390148ce4-merged.mount: Deactivated successfully.
Jan 26 09:50:27 compute-0 podman[145345]: 2026-01-26 09:50:27.709965368 +0000 UTC m=+0.597243289 container remove 2f6916d689944155e38cf9148c0b8e37edbeb07a31a29ec4b498255b26aed55c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_black, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:50:27 compute-0 systemd[1]: libpod-conmon-2f6916d689944155e38cf9148c0b8e37edbeb07a31a29ec4b498255b26aed55c.scope: Deactivated successfully.
Jan 26 09:50:27 compute-0 ceph-mon[74456]: pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:50:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:50:27 compute-0 sudo[145235]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:27 compute-0 sudo[145407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:50:27 compute-0 sudo[145407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:50:27 compute-0 sudo[145407]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:27 compute-0 sudo[145432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:50:27 compute-0 sudo[145432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:50:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:28 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:28 : epoch 697738bc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:50:28 compute-0 podman[145552]: 2026-01-26 09:50:28.331709561 +0000 UTC m=+0.019867889 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:50:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:28 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:28 compute-0 podman[145552]: 2026-01-26 09:50:28.521514107 +0000 UTC m=+0.209672405 container create ca764ecb2647b5aba0ead69fe3313f943a05dfbe27bdeabdc13d3a84e9d6edf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 26 09:50:28 compute-0 sudo[145639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njzdeljfovjsamdxkxfnirqfkktacxqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421027.9755423-192-8270285337214/AnsiballZ_systemd.py'
Jan 26 09:50:28 compute-0 sudo[145639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:28 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:28 compute-0 systemd[1]: Started libpod-conmon-ca764ecb2647b5aba0ead69fe3313f943a05dfbe27bdeabdc13d3a84e9d6edf1.scope.
Jan 26 09:50:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:50:28 compute-0 podman[145552]: 2026-01-26 09:50:28.665593335 +0000 UTC m=+0.353751653 container init ca764ecb2647b5aba0ead69fe3313f943a05dfbe27bdeabdc13d3a84e9d6edf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_tharp, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 09:50:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:50:28 compute-0 podman[145552]: 2026-01-26 09:50:28.684123551 +0000 UTC m=+0.372281859 container start ca764ecb2647b5aba0ead69fe3313f943a05dfbe27bdeabdc13d3a84e9d6edf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_tharp, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:50:28 compute-0 podman[145552]: 2026-01-26 09:50:28.687949357 +0000 UTC m=+0.376107675 container attach ca764ecb2647b5aba0ead69fe3313f943a05dfbe27bdeabdc13d3a84e9d6edf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_tharp, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 09:50:28 compute-0 zen_tharp[145644]: 167 167
Jan 26 09:50:28 compute-0 systemd[1]: libpod-ca764ecb2647b5aba0ead69fe3313f943a05dfbe27bdeabdc13d3a84e9d6edf1.scope: Deactivated successfully.
Jan 26 09:50:28 compute-0 podman[145552]: 2026-01-26 09:50:28.694890041 +0000 UTC m=+0.383048339 container died ca764ecb2647b5aba0ead69fe3313f943a05dfbe27bdeabdc13d3a84e9d6edf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 26 09:50:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-50fcf18295c80887a43fb642ea82bd32c436a673bf0f851291ae7c5607eb4291-merged.mount: Deactivated successfully.
Jan 26 09:50:28 compute-0 podman[145552]: 2026-01-26 09:50:28.735908992 +0000 UTC m=+0.424067280 container remove ca764ecb2647b5aba0ead69fe3313f943a05dfbe27bdeabdc13d3a84e9d6edf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_tharp, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:50:28 compute-0 systemd[1]: libpod-conmon-ca764ecb2647b5aba0ead69fe3313f943a05dfbe27bdeabdc13d3a84e9d6edf1.scope: Deactivated successfully.
Jan 26 09:50:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 09:50:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:28.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 09:50:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:28.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:28 compute-0 python3.9[145641]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 09:50:28 compute-0 podman[145668]: 2026-01-26 09:50:28.944321224 +0000 UTC m=+0.070584053 container create 0ca326f5cd36856cd46b02ca80ac82d890c6a048a5d07d71f6ea6b86d72c43e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shirley, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 09:50:28 compute-0 sudo[145639]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:28 compute-0 systemd[1]: Started libpod-conmon-0ca326f5cd36856cd46b02ca80ac82d890c6a048a5d07d71f6ea6b86d72c43e7.scope.
Jan 26 09:50:29 compute-0 podman[145668]: 2026-01-26 09:50:28.921128973 +0000 UTC m=+0.047391852 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:50:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4387e34c05f5c591477a390fcab2855499aa702b09bc20a0db4bcb755c48cbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4387e34c05f5c591477a390fcab2855499aa702b09bc20a0db4bcb755c48cbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4387e34c05f5c591477a390fcab2855499aa702b09bc20a0db4bcb755c48cbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4387e34c05f5c591477a390fcab2855499aa702b09bc20a0db4bcb755c48cbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:50:29 compute-0 podman[145668]: 2026-01-26 09:50:29.047596508 +0000 UTC m=+0.173859337 container init 0ca326f5cd36856cd46b02ca80ac82d890c6a048a5d07d71f6ea6b86d72c43e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 26 09:50:29 compute-0 podman[145668]: 2026-01-26 09:50:29.054766378 +0000 UTC m=+0.181029217 container start 0ca326f5cd36856cd46b02ca80ac82d890c6a048a5d07d71f6ea6b86d72c43e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 09:50:29 compute-0 podman[145668]: 2026-01-26 09:50:29.05841606 +0000 UTC m=+0.184678909 container attach 0ca326f5cd36856cd46b02ca80ac82d890c6a048a5d07d71f6ea6b86d72c43e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shirley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 09:50:29 compute-0 lvm[145875]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:50:29 compute-0 lvm[145875]: VG ceph_vg0 finished
Jan 26 09:50:29 compute-0 beautiful_shirley[145688]: {}
Jan 26 09:50:29 compute-0 sudo[145916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hewdelxjysbmjrkyuuhvrskxxbzhmowt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769421029.3638136-216-85454948858779/AnsiballZ_edpm_nftables_snippet.py'
Jan 26 09:50:29 compute-0 sudo[145916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:29 compute-0 systemd[1]: libpod-0ca326f5cd36856cd46b02ca80ac82d890c6a048a5d07d71f6ea6b86d72c43e7.scope: Deactivated successfully.
Jan 26 09:50:29 compute-0 systemd[1]: libpod-0ca326f5cd36856cd46b02ca80ac82d890c6a048a5d07d71f6ea6b86d72c43e7.scope: Consumed 1.278s CPU time.
Jan 26 09:50:29 compute-0 podman[145920]: 2026-01-26 09:50:29.876622706 +0000 UTC m=+0.034237620 container died 0ca326f5cd36856cd46b02ca80ac82d890c6a048a5d07d71f6ea6b86d72c43e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:50:29 compute-0 ceph-mon[74456]: pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:50:30 compute-0 python3[145918]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 26 09:50:30 compute-0 sudo[145916]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4387e34c05f5c591477a390fcab2855499aa702b09bc20a0db4bcb755c48cbc-merged.mount: Deactivated successfully.
Jan 26 09:50:30 compute-0 podman[145920]: 2026-01-26 09:50:30.137517138 +0000 UTC m=+0.295132032 container remove 0ca326f5cd36856cd46b02ca80ac82d890c6a048a5d07d71f6ea6b86d72c43e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_shirley, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 09:50:30 compute-0 systemd[1]: libpod-conmon-0ca326f5cd36856cd46b02ca80ac82d890c6a048a5d07d71f6ea6b86d72c43e7.scope: Deactivated successfully.
Jan 26 09:50:30 compute-0 sudo[145432]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:50:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:30 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:30 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:50:30 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:30 compute-0 sudo[145962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:50:30 compute-0 sudo[145962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:50:30 compute-0 sudo[145962]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:30 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:30 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:50:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:30.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:50:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:30.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:50:30 compute-0 sudo[146111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooudhxnsvrleifogbgxixkwsgbfpagjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421030.471251-243-147641356682223/AnsiballZ_file.py'
Jan 26 09:50:30 compute-0 sudo[146111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:31 compute-0 python3.9[146113]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:31 compute-0 sudo[146111]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:31 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:31 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:50:31 compute-0 ceph-mon[74456]: pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:50:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:31 : epoch 697738bc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:50:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:31 : epoch 697738bc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:50:31 compute-0 sudo[146263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojwnlqxbgcxjtbymipqyxjwazdkgcwdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421031.3049073-267-117616301260716/AnsiballZ_stat.py'
Jan 26 09:50:31 compute-0 sudo[146263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:32 compute-0 python3.9[146265]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:50:32 compute-0 sudo[146263]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:32 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:32 compute-0 sudo[146343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzlhkqulbjccjhuzvmikmlkqoiznhrqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421031.3049073-267-117616301260716/AnsiballZ_file.py'
Jan 26 09:50:32 compute-0 sudo[146343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:32 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:32 compute-0 python3.9[146345]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:32 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:32 compute-0 sudo[146343]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:50:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:50:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:32.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:32.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:33 compute-0 sudo[146495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdzlqxvtrcibwiesbqrgaopbevzduwvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421032.8944445-303-183289920481089/AnsiballZ_stat.py'
Jan 26 09:50:33 compute-0 sudo[146495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:33 compute-0 python3.9[146497]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:50:33 compute-0 sudo[146495]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:50:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:50:33 compute-0 ceph-mon[74456]: pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:50:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:50:33 compute-0 sudo[146573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orlbtgjyxbvyrarctulysmbxhfivwgyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421032.8944445-303-183289920481089/AnsiballZ_file.py'
Jan 26 09:50:33 compute-0 sudo[146573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:33 compute-0 python3.9[146575]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.diw70_z2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:34 compute-0 sudo[146573]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:34 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:34 : epoch 697738bc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:50:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:34 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:34 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:34 compute-0 sshd-session[146600]: Invalid user test from 157.245.76.178 port 47198
Jan 26 09:50:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:50:34 compute-0 sshd-session[146600]: Connection closed by invalid user test 157.245.76.178 port 47198 [preauth]
Jan 26 09:50:34 compute-0 sudo[146731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbcvswjtjxhjigcrhqsdtxdvtmehtoyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421034.350827-339-257468147608854/AnsiballZ_stat.py'
Jan 26 09:50:34 compute-0 sudo[146731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:34.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:34.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:34 compute-0 python3.9[146733]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:50:34 compute-0 sudo[146731]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:35 compute-0 sudo[146809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgyjigunjhotqcpjjlcofdkijrawragb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421034.350827-339-257468147608854/AnsiballZ_file.py'
Jan 26 09:50:35 compute-0 sudo[146809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:35 compute-0 python3.9[146811]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:35 compute-0 sudo[146809]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:35 compute-0 ceph-mon[74456]: pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:50:36 compute-0 sudo[146935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:50:36 compute-0 sudo[146935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:50:36 compute-0 sudo[146935]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:36 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:36 compute-0 sudo[146988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymlbwkmufwbdkqzpqqdvauyswiilgayg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421035.7138827-378-89671429114593/AnsiballZ_command.py'
Jan 26 09:50:36 compute-0 sudo[146988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:36 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:36 compute-0 python3.9[146990]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
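
(The nft -j call above snapshots the live ruleset as JSON before any of the EDPM
nftables files are written. A minimal manual equivalent, assuming only the
Python stdlib is available for pretty-printing, would be:)

    $ nft -j list ruleset | python3 -m json.tool | head
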
Jan 26 09:50:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:36 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:36 compute-0 sudo[146988]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:50:36] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:50:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:50:36] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:50:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:50:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:36.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:36.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:50:36.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:50:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:50:36.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:50:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:50:36.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:50:37 compute-0 sudo[147141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxwdcerwmyrrexdglhvgjkwqmdtyaxbp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769421036.8417125-402-218983034198028/AnsiballZ_edpm_nftables_from_files.py'
Jan 26 09:50:37 compute-0 sudo[147141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:37 compute-0 python3[147143]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 09:50:37 compute-0 sudo[147141]: pam_unix(sudo:session): session closed for user root
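
(edpm_nftables_from_files appears to gather YAML rule definitions from the
given src directory and render them into the /etc/nftables snippets seen in the
following tasks; judging from the stat/copy tasks at 09:50:32 and 09:50:33,
that directory holds at least:)

    $ ls /var/lib/edpm-config/firewall/
    edpm-nftables-base.yaml  edpm-nftables-user-rules.yaml
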
Jan 26 09:50:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:50:37 compute-0 ceph-mon[74456]: pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:50:38 compute-0 sudo[147293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veswzzqsaycmwnauvujzjtoheimzyewh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421037.912873-426-52593131886195/AnsiballZ_stat.py'
Jan 26 09:50:38 compute-0 sudo[147293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:38 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:38 compute-0 python3.9[147295]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:50:38 compute-0 sudo[147293]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:38 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:38 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:50:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:38.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:38.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:39 compute-0 sudo[147420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrwgtjwxblymkzegvazpruzfhdxjlxmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421037.912873-426-52593131886195/AnsiballZ_copy.py'
Jan 26 09:50:39 compute-0 sudo[147420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:39 compute-0 python3.9[147422]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421037.912873-426-52593131886195/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:39 compute-0 sudo[147420]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:39 compute-0 ceph-mon[74456]: pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:50:40 compute-0 sudo[147572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwnvjvuunzkbpdjnzqmisemsdwrtlwjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421039.5372283-471-240813321194802/AnsiballZ_stat.py'
Jan 26 09:50:40 compute-0 sudo[147572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:40 compute-0 python3.9[147574]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:50:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:40 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:40 compute-0 sudo[147572]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:40 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:40 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:50:40 compute-0 sudo[147699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpcdflbelxlzsikislxfptkjomsagswd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421039.5372283-471-240813321194802/AnsiballZ_copy.py'
Jan 26 09:50:40 compute-0 sudo[147699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:40.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:40.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:40 compute-0 python3.9[147701]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421039.5372283-471-240813321194802/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:40 compute-0 sudo[147699]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095041 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:50:41 compute-0 sudo[147851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqbecblrwnobebxfssyfmcqgjthswttq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421041.4308052-516-184247223415906/AnsiballZ_stat.py'
Jan 26 09:50:41 compute-0 sudo[147851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:41 compute-0 python3.9[147853]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:50:42 compute-0 sudo[147851]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:42 compute-0 ceph-mon[74456]: pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:50:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:42 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:42 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:42 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:50:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:50:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:50:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:42.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:50:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:50:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:42.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:50:42 compute-0 sudo[147978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifaaoirndxkefrbthdlcnjikdksdasam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421041.4308052-516-184247223415906/AnsiballZ_copy.py'
Jan 26 09:50:42 compute-0 sudo[147978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:43 compute-0 python3.9[147980]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421041.4308052-516-184247223415906/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:43 compute-0 sudo[147978]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:43 compute-0 sudo[148130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwczxlrckdcltlrjkbcezeaeguereehq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421043.5990133-561-202721464937819/AnsiballZ_stat.py'
Jan 26 09:50:43 compute-0 sudo[148130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:44 compute-0 ceph-mon[74456]: pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:50:44 compute-0 python3.9[148132]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:50:44 compute-0 sudo[148130]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:44 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:44 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:44 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:44 compute-0 sudo[148257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggaxvqpiibnfaogbiqpcruhbferkyapi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421043.5990133-561-202721464937819/AnsiballZ_copy.py'
Jan 26 09:50:44 compute-0 sudo[148257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:50:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:44.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:50:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:44.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:50:44 compute-0 python3.9[148259]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421043.5990133-561-202721464937819/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:44 compute-0 sudo[148257]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:45 compute-0 sudo[148409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cippvppxxkqokuhstnckaucxgkpxqeqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421045.1597447-606-67826623391288/AnsiballZ_stat.py'
Jan 26 09:50:45 compute-0 sudo[148409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:45 compute-0 python3.9[148411]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:50:45 compute-0 sudo[148409]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:46 compute-0 ceph-mon[74456]: pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:50:46 compute-0 sudo[148534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmyvszvcqdozseetnuywodgrwldxetzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421045.1597447-606-67826623391288/AnsiballZ_copy.py'
Jan 26 09:50:46 compute-0 sudo[148534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:46 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:46 compute-0 python3.9[148536]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421045.1597447-606-67826623391288/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:46 compute-0 sudo[148534]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:46 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea0002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:46 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:50:46] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:50:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:50:46] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:50:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:50:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:50:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:46.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:50:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:50:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:46.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:50:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:50:46.986Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:50:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:50:46.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:50:47 compute-0 sudo[148688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpatnvhqdllzpsqmkhntnyohexmrctbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421046.817679-651-137367243795698/AnsiballZ_file.py'
Jan 26 09:50:47 compute-0 sudo[148688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:47 compute-0 python3.9[148690]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:47 compute-0 sudo[148688]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:50:48 compute-0 ceph-mon[74456]: pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:50:48 compute-0 sudo[148840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjiaklrabtfilpkbauetxbuokaojyust ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421047.8740287-675-133550143225101/AnsiballZ_command.py'
Jan 26 09:50:48 compute-0 sudo[148840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:48 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:48 compute-0 python3.9[148842]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:50:48 compute-0 sudo[148840]: pam_unix(sudo:session): session closed for user root
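
(The check above concatenates the five generated nft snippets and feeds them to
nft's dry-run mode; -c parses and validates the combined ruleset without
committing anything to the kernel. The same validation can be reproduced by
hand:)

    $ cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
          /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
          /etc/nftables/edpm-jumps.nft | nft -c -f -   # exit 0 => ruleset is valid
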
Jan 26 09:50:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:48 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:48 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea0002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:50:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:50:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:50:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:50:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:50:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:50:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:50:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:50:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:50:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:48.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:50:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:50:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:50:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:48.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:50:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:50:49 compute-0 sudo[148997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euifkeczntaygrjcvnnnkpmpebtvhsam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421048.7494562-699-152290327064053/AnsiballZ_blockinfile.py'
Jan 26 09:50:49 compute-0 sudo[148997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:49 compute-0 python3.9[148999]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:49 compute-0 sudo[148997]: pam_unix(sudo:session): session closed for user root
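
(Given its marker settings, the blockinfile task above should leave a managed
block like the following in /etc/sysconfig/nftables.conf; the include list is
taken verbatim from the logged block parameter, the surrounding markers from
marker="# {mark} ANSIBLE MANAGED BLOCK". The validate="nft -c -f %s" parameter
means the edited file must parse before it replaces the original:)

    $ cat /etc/sysconfig/nftables.conf
    ...
    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
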
Jan 26 09:50:50 compute-0 sudo[149149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbrlodrmebipfgjpeatxphquwzaymipj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421049.848372-726-190468351683438/AnsiballZ_command.py'
Jan 26 09:50:50 compute-0 sudo[149149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:50 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:50 compute-0 ceph-mon[74456]: pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:50:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:50 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:50 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:50 compute-0 python3.9[149153]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:50:50 compute-0 sudo[149149]: pam_unix(sudo:session): session closed for user root
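
(edpm-chains.nft is loaded on its own first, plausibly so that every chain
exists before the later flush/rules/jump files reference them. A quick check
that the chains landed, assuming the chain names carry an identifiable EDPM
prefix, which is not confirmed by this log:)

    $ nft list chains | grep -i edpm
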
Jan 26 09:50:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:50:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:50.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:50.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:51 compute-0 sudo[149305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omaobtmgabmawyensptypuqcztlmiaju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421050.8413227-750-21065434975778/AnsiballZ_stat.py'
Jan 26 09:50:51 compute-0 sudo[149305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:51 compute-0 python3.9[149307]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:50:51 compute-0 sudo[149305]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:51 compute-0 ceph-mon[74456]: pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:50:52 compute-0 sudo[149459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mttdkirvwyglzvumrvcmqsmfwmwpzrat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421051.6992662-774-94195005984471/AnsiballZ_command.py'
Jan 26 09:50:52 compute-0 sudo[149459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:52 compute-0 python3.9[149461]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:50:52 compute-0 sudo[149459]: pam_unix(sudo:session): session closed for user root
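
(This is the actual apply step: flush the previously loaded rules, load the
regenerated ones, then refresh the jump rules. A single nft -f invocation is
applied as one transaction, so the ruleset never passes through a half-updated
state:)

    $ cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
          /etc/nftables/edpm-update-jumps.nft | nft -f -
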
Jan 26 09:50:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:52 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc001110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:52 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:52 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:50:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:50:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:50:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:52.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:50:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:52.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:52 compute-0 sudo[149616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzbrmeeqwtaqfnlypbywkokmcltbiole ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421052.6734388-798-122364121171216/AnsiballZ_file.py'
Jan 26 09:50:52 compute-0 sudo[149616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:53 compute-0 python3.9[149618]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:50:53 compute-0 sudo[149616]: pam_unix(sudo:session): session closed for user root
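
(edpm-rules.nft.changed acts as a change sentinel: it was touched at 09:50:47
after the rules were rewritten, checked with stat at 09:50:51, and is deleted
here once the apply succeeded. Whether an apply is still pending can be read
off the file's existence:)

    $ test -f /etc/nftables/edpm-rules.nft.changed && echo "apply pending" || echo "ruleset in sync"
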
Jan 26 09:50:53 compute-0 ceph-mon[74456]: pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:50:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095053 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:50:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:54 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:54 compute-0 python3.9[149770]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:50:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:54 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc001110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:54 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:50:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:54.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:50:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:54.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:50:55 compute-0 sudo[149921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zczzpciuauboxlajkmhnwtmuakhkrivc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421055.4619908-918-84023078493072/AnsiballZ_command.py'
Jan 26 09:50:55 compute-0 sudo[149921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:55 compute-0 ceph-mon[74456]: pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:50:55 compute-0 python3.9[149923]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:50:55 compute-0 ovs-vsctl[149924]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
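
(The external_ids written above are what ovn-controller reads to locate the
southbound database and build its tunnels; each key can be read back
individually. The quoted output follows ovs-vsctl's convention for string map
values, with the values themselves taken from the command just logged:)

    $ ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
    "ssl:ovsdbserver-sb.openstack.svc:6642"
    $ ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip
    "172.19.0.100"
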
Jan 26 09:50:56 compute-0 sudo[149921]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:56 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:56 compute-0 sudo[149951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:50:56 compute-0 sudo[149951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:50:56 compute-0 sudo[149951]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:56 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:56 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0022a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:50:56] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Jan 26 09:50:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:50:56] "GET /metrics HTTP/1.1" 200 48262 "" "Prometheus/2.51.0"
Jan 26 09:50:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:50:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:50:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:56.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:50:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:50:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:56.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
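The paired anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102 recur every two seconds, the signature of load-balancer health probes rather than user traffic. A probe of the same shape, noting that the radosgw frontend port is not logged here, so 8080 is an assumption:

    # Send the same HEAD / probe the balancers send; expect HTTP 200.
    import http.client

    conn = http.client.HTTPConnection("127.0.0.1", 8080, timeout=2)  # port assumed
    conn.request("HEAD", "/")
    print(conn.getresponse().status)
    conn.close()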
Jan 26 09:50:56 compute-0 sudo[150101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xigguqituedwnngeouzimadabjjejqcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421056.5574074-945-208219486120542/AnsiballZ_command.py'
Jan 26 09:50:56 compute-0 sudo[150101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:50:56.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:50:57 compute-0 python3.9[150103]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:50:57 compute-0 sudo[150101]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:50:57 compute-0 sudo[150256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwfknfmkirrcblkeqlevkrwxtqlmtudc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421057.4792087-969-240672137614848/AnsiballZ_command.py'
Jan 26 09:50:57 compute-0 sudo[150256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:57 compute-0 ceph-mon[74456]: pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:50:57 compute-0 python3.9[150258]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:50:58 compute-0 ovs-vsctl[150259]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 26 09:50:58 compute-0 sudo[150256]: pam_unix(sudo:session): session closed for user root
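Taken together with the pipefail'd `ovs-vsctl show | grep -q "Manager"` probe a few lines up, this is an idempotent create: the Manager row exposing the local OVSDB on ptcp:6640 is only added when the probe finds none. A compact sketch of the same sequence:

    # Idempotent version of the two tasks above: create the ptcp:6640 Manager
    # row only if 'ovs-vsctl show' does not already report one.
    import subprocess

    def ensure_local_manager(host="127.0.0.1", port=6640):
        show = subprocess.run(["ovs-vsctl", "show"], check=True,
                              capture_output=True, text=True).stdout
        if "Manager" in show:
            return False  # already present, nothing to do
        subprocess.run(
            ["ovs-vsctl", "--timeout=5", "--id=@manager", "--",
             "create", "Manager", f'target="ptcp:{port}:{host}"', "--",
             "add", "Open_vSwitch", ".", "manager_options", "@manager"],
            check=True)
        return True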
Jan 26 09:50:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:58 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:58 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:50:58 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:50:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:50:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:50:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:50:58.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:50:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:50:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:50:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:50:58.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:50:58 compute-0 python3.9[150411]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
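The stat on tls-ca-bundle.pem pairs with the ssl: ovn-remote configured earlier; ovn-controller can only reach the southbound DB if that bundle is present and parseable. A quick parse check, path taken from the task above:

    # Load the CA bundle ovn-controller will be pointed at; a parse failure
    # raises ssl.SSLError here, before the service ever tries to connect.
    import ssl

    ctx = ssl.create_default_context(
        cafile="/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem")
    print(len(ctx.get_ca_certs()), "CA certificates loaded")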
Jan 26 09:50:59 compute-0 sudo[150563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lklmpdfqpcribiaayboyqopuuwbloefb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421059.185036-1020-173888260731424/AnsiballZ_file.py'
Jan 26 09:50:59 compute-0 sudo[150563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:50:59 compute-0 python3.9[150565]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:50:59 compute-0 sudo[150563]: pam_unix(sudo:session): session closed for user root
Jan 26 09:50:59 compute-0 ceph-mon[74456]: pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:51:00 compute-0 sudo[150715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqadpxyjmygdgippqpyibmeauvkbyode ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421059.9799337-1044-251261552812631/AnsiballZ_stat.py'
Jan 26 09:51:00 compute-0 sudo[150715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0022a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:00 compute-0 python3.9[150719]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:51:00 compute-0 sudo[150715]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:00 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:51:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.002000053s ======
Jan 26 09:51:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:00.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 26 09:51:00 compute-0 sudo[150795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwctuoiyjeawixzcbnokvmwgxoelrzxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421059.9799337-1044-251261552812631/AnsiballZ_file.py'
Jan 26 09:51:00 compute-0 sudo[150795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:01 compute-0 python3.9[150797]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:51:01 compute-0 sudo[150795]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:01 compute-0 sudo[150947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnixiurxboeleizpiklsmtxergzeplhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421061.414033-1044-73012374414406/AnsiballZ_stat.py'
Jan 26 09:51:01 compute-0 sudo[150947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:01 compute-0 python3.9[150949]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:51:02 compute-0 sudo[150947]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:02 compute-0 ceph-mon[74456]: pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:51:02 compute-0 sudo[151027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txykeroxquzxcpcufwuqoxxuwwhpudlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421061.414033-1044-73012374414406/AnsiballZ_file.py'
Jan 26 09:51:02 compute-0 sudo[151027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:02 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:02 compute-0 python3.9[151029]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:51:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:02 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc0022a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:02 compute-0 sudo[151027]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:02 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:51:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:51:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:02.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:51:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:02.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:51:03 compute-0 sudo[151179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umgoxbhzvqkrwtjyjeunpcxwxojsaoxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421062.8166568-1113-80846586256592/AnsiballZ_file.py'
Jan 26 09:51:03 compute-0 sudo[151179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:03 : epoch 697738bc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:51:03 compute-0 python3.9[151181]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:51:03 compute-0 sudo[151179]: pam_unix(sudo:session): session closed for user root
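The mode=420 above is not an error: Ansible logs the integer it received, and 420 decimal is exactly 0o644, so the preset directory still ends up rw-r--r--. This is the usual YAML gotcha of an unquoted mode surviving as a decimal integer:

    # 420 in the journal is just the decimal form of the intended octal mode.
    assert 420 == 0o644
    print(oct(420))  # 0o644
    # Quoting the mode ('0644') in the playbook avoids the decimal round-trip.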
Jan 26 09:51:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:51:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:51:04 compute-0 ceph-mon[74456]: pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:51:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:51:04 compute-0 sudo[151331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtsxnhzxygpgdfrhmrkdoxzljfkgganc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421063.7156076-1137-104899705380342/AnsiballZ_stat.py'
Jan 26 09:51:04 compute-0 sudo[151331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:04 compute-0 python3.9[151333]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:51:04 compute-0 sudo[151331]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:04 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:04 compute-0 sudo[151411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vltbegzwfuephoqdczyjsevbodrwcdfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421063.7156076-1137-104899705380342/AnsiballZ_file.py'
Jan 26 09:51:04 compute-0 sudo[151411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:04 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:04 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 26 09:51:04 compute-0 python3.9[151413]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:51:04 compute-0 sudo[151411]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:04.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:04.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:05 compute-0 sudo[151563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrucjuqfjqapvtyxhftdbtxtfmhzscaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421065.1595304-1173-158252820384774/AnsiballZ_stat.py'
Jan 26 09:51:05 compute-0 sudo[151563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:05 compute-0 python3.9[151565]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:51:05 compute-0 sudo[151563]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:06 compute-0 ceph-mon[74456]: pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 26 09:51:06 compute-0 sudo[151641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkqhpkvshkjwonntnbzgerkcdiwxijgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421065.1595304-1173-158252820384774/AnsiballZ_file.py'
Jan 26 09:51:06 compute-0 sudo[151641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:06 : epoch 697738bc : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:51:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:06 : epoch 697738bc : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:51:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:06 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:06 compute-0 python3.9[151643]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:51:06 compute-0 sudo[151641]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:06 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:06 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:51:06] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Jan 26 09:51:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:51:06] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Jan 26 09:51:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 26 09:51:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:06.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:06.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:51:06.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:51:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:51:06.988Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:51:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:51:06.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
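Both ceph-dashboard webhook receivers (compute-1 and compute-2, port 8443) fail with i/o timeouts and context deadlines, so alertmanager cancels the notification after its retries. A raw TCP probe from this node distinguishes a routing or firewall problem from a slow receiver:

    # Probe the two failing receiver endpoints named in the errors above.
    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        s = socket.socket()
        s.settimeout(2)
        try:
            s.connect((host, 8443))
            print(host, "tcp/8443 reachable")
        except OSError as exc:
            print(host, "tcp/8443 unreachable:", exc)
        finally:
            s.close()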
Jan 26 09:51:07 compute-0 sudo[151795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrkwnqatoarjecqahurbnsiouswmwxow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421066.7455003-1209-171437137215724/AnsiballZ_systemd.py'
Jan 26 09:51:07 compute-0 sudo[151795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:07 compute-0 python3.9[151797]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:51:07 compute-0 systemd[1]: Reloading.
Jan 26 09:51:07 compute-0 systemd-sysv-generator[151828]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:51:07 compute-0 systemd-rc-local-generator[151825]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:51:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:51:07 compute-0 sudo[151795]: pam_unix(sudo:session): session closed for user root
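The ansible.builtin.systemd invocation explains the adjacent systemd lines: daemon_reload=True triggers the "Reloading." pass (and the generator warnings that always accompany it), after which the unit is enabled and started. Roughly the CLI equivalent:

    # Rough CLI equivalent of daemon_reload=True, enabled=True, state=started.
    import subprocess

    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "--now", "edpm-container-shutdown.service"]):
        subprocess.run(cmd, check=True)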
Jan 26 09:51:08 compute-0 ceph-mon[74456]: pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 26 09:51:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:08 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:08 compute-0 sudo[151986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktzokfzvhtzbkkhzshaegoxxzusluupr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421068.1039462-1233-161590945248146/AnsiballZ_stat.py'
Jan 26 09:51:08 compute-0 sudo[151986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:08 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:08 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:08 compute-0 python3.9[151988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:51:08 compute-0 sudo[151986]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:51:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:08.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:08.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:08 compute-0 sudo[152064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytnophjdmazkrnwftdrgkixqecieqwsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421068.1039462-1233-161590945248146/AnsiballZ_file.py'
Jan 26 09:51:08 compute-0 sudo[152064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:09 compute-0 python3.9[152066]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:51:09 compute-0 sudo[152064]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:09 : epoch 697738bc : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:51:09 compute-0 sudo[152216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfudndcisjnulaooebxgdyivykyoeyzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421069.4597957-1269-136734192492864/AnsiballZ_stat.py'
Jan 26 09:51:09 compute-0 sudo[152216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:10 compute-0 python3.9[152218]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:51:10 compute-0 sudo[152216]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:10 compute-0 ceph-mon[74456]: pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:51:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:10 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:10 compute-0 sudo[152296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylmjorteiaxfgvnppkgmchnftnktteun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421069.4597957-1269-136734192492864/AnsiballZ_file.py'
Jan 26 09:51:10 compute-0 sudo[152296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:10 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:10 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:10 compute-0 python3.9[152298]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:51:10 compute-0 sudo[152296]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:51:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:10.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:10.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:11 compute-0 sudo[152449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-speagpqpvbdlvbljxqqdvaxwpzkystmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421071.0107386-1305-175872812240523/AnsiballZ_systemd.py'
Jan 26 09:51:11 compute-0 sudo[152449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:11 compute-0 python3.9[152451]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:51:11 compute-0 systemd[1]: Reloading.
Jan 26 09:51:11 compute-0 systemd-rc-local-generator[152479]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:51:11 compute-0 systemd-sysv-generator[152483]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:51:12 compute-0 systemd[1]: Starting Create netns directory...
Jan 26 09:51:12 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 26 09:51:12 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 26 09:51:12 compute-0 systemd[1]: Finished Create netns directory.
Jan 26 09:51:12 compute-0 sudo[152449]: pam_unix(sudo:session): session closed for user root
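The unit names tell the story: starting netns-placeholder creates a throwaway namespace (the run-netns-placeholder.mount that is immediately deactivated), which forces /run/netns into existence for later container network namespaces, matching the unit's "Create netns directory" description. The exact ExecStart is not logged, so the end-state check below is an assumption:

    # After the oneshot, /run/netns should exist for 'ip netns' style pinning.
    import os

    assert os.path.isdir("/run/netns")
    print("netns mount point present:", os.path.ismount("/run/netns"))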
Jan 26 09:51:12 compute-0 ceph-mon[74456]: pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:51:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:12 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:12 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:12 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:51:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:51:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:51:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:12.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:51:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:12.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:13 compute-0 sudo[152644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rissizfubwypdnqognjyiaislxgubrta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421072.7170053-1335-25616313781170/AnsiballZ_file.py'
Jan 26 09:51:13 compute-0 sudo[152644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:13 compute-0 python3.9[152646]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:51:13 compute-0 sudo[152644]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:13 compute-0 sudo[152796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxdrakrsfzknunhlknsjeyczeacufcvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421073.5091012-1359-224936648950321/AnsiballZ_stat.py'
Jan 26 09:51:13 compute-0 sudo[152796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:13 compute-0 python3.9[152798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:51:14 compute-0 sudo[152796]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:14 compute-0 ceph-mon[74456]: pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:51:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:14 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:14 compute-0 sudo[152921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abwjpfrufchbbqarwguupvfnfqzinbve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421073.5091012-1359-224936648950321/AnsiballZ_copy.py'
Jan 26 09:51:14 compute-0 sudo[152921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:14 compute-0 python3.9[152923]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421073.5091012-1359-224936648950321/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:51:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:14 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003cf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:14 compute-0 sudo[152921]: pam_unix(sudo:session): session closed for user root
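The copy above records the SHA-1 of the installed ovn_controller healthcheck, which makes an out-of-band verification trivial:

    # Recompute the checksum Ansible logged for the installed healthcheck.
    import hashlib

    path = "/var/lib/openstack/healthchecks/ovn_controller/healthcheck"
    with open(path, "rb") as fh:
        digest = hashlib.sha1(fh.read()).hexdigest()
    assert digest == "4098dd010265fabdf5c26b97d169fc4e575ff457"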
Jan 26 09:51:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:14 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:51:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:14.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:14.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:15 compute-0 sudo[153073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flxirqjtjiggzjcvchbelwkrasrufrou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421075.5060184-1410-178463504861549/AnsiballZ_file.py'
Jan 26 09:51:15 compute-0 sudo[153073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095115 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:51:15 compute-0 python3.9[153075]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:51:16 compute-0 sudo[153073]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:16 compute-0 ceph-mon[74456]: pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:51:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:16 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:16 compute-0 sshd-session[153076]: Invalid user test from 157.245.76.178 port 39572
Jan 26 09:51:16 compute-0 sshd-session[153076]: Connection closed by invalid user test 157.245.76.178 port 39572 [preauth]
Jan 26 09:51:16 compute-0 sudo[153156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:51:16 compute-0 sudo[153156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:51:16 compute-0 sudo[153156]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:16 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:16 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:51:16] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Jan 26 09:51:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:51:16] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Jan 26 09:51:16 compute-0 sudo[153254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljbsghrrrwrrgnlcekvhlexlaluvqcjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421076.3533409-1434-245609153370365/AnsiballZ_file.py'
Jan 26 09:51:16 compute-0 sudo[153254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:51:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:16.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:16.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:16 compute-0 python3.9[153256]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:51:16 compute-0 sudo[153254]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:51:16.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:51:17 compute-0 sudo[153406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxvvrcrptytumvizhzwqbyipxilacxto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421077.1993241-1458-266791177367709/AnsiballZ_stat.py'
Jan 26 09:51:17 compute-0 sudo[153406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:17 compute-0 python3.9[153408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:51:17 compute-0 sudo[153406]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:51:18 compute-0 sudo[153529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jybzlbxveaioeqsofxejbwcwknldfqae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421077.1993241-1458-266791177367709/AnsiballZ_copy.py'
Jan 26 09:51:18 compute-0 sudo[153529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:18 compute-0 python3.9[153531]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421077.1993241-1458-266791177367709/.source.json _original_basename=.9xh39tcp follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:51:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:18 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:18 compute-0 ceph-mon[74456]: pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:51:18 compute-0 sudo[153529]: pam_unix(sudo:session): session closed for user root
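The JSON dropped into /var/lib/kolla/config_files is what the kolla-style entrypoint of the ovn_controller container consumes at startup: a command plus lists of files to copy and permissions to apply. Only its checksum is logged, so the shape below is illustrative rather than the actual payload:

    # Illustrative kolla config-file shape; the real file's contents were not
    # logged, only its sha1 (2328fc98...).
    ovn_controller_json = {
        "command": "/usr/bin/ovn-controller unix:/run/openvswitch/db.sock",  # guess
        "config_files": [],  # [{"source": ..., "dest": ..., "owner": ..., "perm": ...}]
        "permissions": [],   # [{"path": ..., "owner": ...}]
    }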
Jan 26 09:51:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:18 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:51:18
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'vms', 'volumes', '.nfs', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.log', 'default.rgw.meta']
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
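"prepared 0/10 upmap changes" means the optimizer found nothing to move: with all 353 PGs active+clean, the distribution is already within the 0.05 max-misplaced budget. One way to see the same from the CLI, assuming admin keyring access on this node:

    # Query the balancer the mgr lines above belong to.
    import json
    import subprocess

    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    status = json.loads(out)
    print(status.get("mode"), "active:", status.get("active"))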
Jan 26 09:51:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:18 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:51:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:51:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:18.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:51:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:18.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:51:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:51:19 compute-0 python3.9[153683]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:51:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:51:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:20 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:20 compute-0 ceph-mon[74456]: pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:51:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:20 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:20 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 09:51:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:51:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:20.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:51:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:51:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:20.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:51:21 compute-0 sudo[154106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xihgdmsfmkzmlhdeyecxltiqpmfenozy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421080.992486-1578-257041732641731/AnsiballZ_container_config_data.py'
Jan 26 09:51:21 compute-0 sudo[154106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:21 compute-0 python3.9[154108]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 26 09:51:21 compute-0 sudo[154106]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:22 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ecc004440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:22 compute-0 ceph-mon[74456]: pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 09:51:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:22 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:22 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec4002d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:22 compute-0 sudo[154263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqwpjwgpdjlhlxycruihzfwirogowgdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421082.1692605-1611-29793230079570/AnsiballZ_container_config_hash.py'
Jan 26 09:51:22 compute-0 sudo[154263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:51:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:51:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:22.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:22.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:22 compute-0 python3.9[154265]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 09:51:22 compute-0 sudo[154263]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:23 compute-0 sudo[154415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltfejhuxqupjfesijgkfvkqszppmtbyy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769421083.355886-1641-264869667559411/AnsiballZ_edpm_container_manage.py'
Jan 26 09:51:23 compute-0 sudo[154415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:24 compute-0 python3[154417]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 09:51:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:24 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea4000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:24 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 09:51:24 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 09:51:24 compute-0 ceph-mon[74456]: pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:51:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:24 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:24 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:51:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:24.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:24.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:26 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec4002d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:26 compute-0 ceph-mon[74456]: pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:51:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:26 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:26 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:51:26] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:51:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:51:26] "GET /metrics HTTP/1.1" 200 48265 "" "Prometheus/2.51.0"
Jan 26 09:51:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:51:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:51:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:26.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:51:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:26.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:51:26.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:51:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:51:26.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:51:27 compute-0 ceph-mon[74456]: pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:51:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:51:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:28 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea8003d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:28 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ec4002d60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:28 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea40016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:51:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:28.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:28.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:29 compute-0 podman[154432]: 2026-01-26 09:51:29.678046837 +0000 UTC m=+5.437048709 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Jan 26 09:51:29 compute-0 ceph-mon[74456]: pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:51:29 compute-0 podman[154559]: 2026-01-26 09:51:29.801672849 +0000 UTC m=+0.042972359 container create 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, managed_by=edpm_ansible, container_name=ovn_controller)
Jan 26 09:51:29 compute-0 podman[154559]: 2026-01-26 09:51:29.779055054 +0000 UTC m=+0.020354564 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Jan 26 09:51:29 compute-0 python3[154417]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Jan 26 09:51:29 compute-0 sudo[154415]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:30 compute-0 sudo[154747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfapnjjqdlfgbpjvaykezjnktuhhzjzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421090.07141-1665-215172602755217/AnsiballZ_stat.py'
Jan 26 09:51:30 compute-0 sudo[154747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[140510]: 26/01/2026 09:51:30 : epoch 697738bc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9ea40016a0 fd 39 proxy ignored for local
Jan 26 09:51:30 compute-0 kernel: ganesha.nfsd[154188]: segfault at 50 ip 00007f9f541b432e sp 00007f9ebd7f9210 error 4 in libntirpc.so.5.8[7f9f54199000+2c000] likely on CPU 1 (core 0, socket 1)
Jan 26 09:51:30 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 09:51:30 compute-0 systemd[1]: Started Process Core Dump (PID 154750/UID 0).
Jan 26 09:51:30 compute-0 python3.9[154749]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:51:30 compute-0 sudo[154747]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:51:30 compute-0 sudo[154778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:51:30 compute-0 sudo[154778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:51:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:30 compute-0 sudo[154778]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:30.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:51:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:30.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:51:30 compute-0 sudo[154803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:51:30 compute-0 sudo[154803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:51:31 compute-0 sudo[154803]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:51:31 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:51:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:51:31 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:51:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:51:31 compute-0 sudo[154984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsvnbhklkngbjmtveqwidhcgwyvbwzei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421091.2462225-1692-228411872147943/AnsiballZ_file.py'
Jan 26 09:51:31 compute-0 sudo[154984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:31 compute-0 python3.9[154986]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:51:31 compute-0 sudo[154984]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:32 compute-0 sudo[155060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkpoqcatmdunspidhuhzuucgkmhxuysv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421091.2462225-1692-228411872147943/AnsiballZ_stat.py'
Jan 26 09:51:32 compute-0 sudo[155060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:32 compute-0 systemd-coredump[154751]: Process 140514 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 59:
                                                    #0  0x00007f9f541b432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    #1  0x0000000000000000 n/a (n/a + 0x0)
                                                    #2  0x00007f9f541be900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 09:51:32 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:51:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:51:32 compute-0 ceph-mon[74456]: pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:51:32 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:51:32 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:51:32 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:51:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:51:32 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:51:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:51:32 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:51:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:51:32 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:51:32 compute-0 python3.9[155062]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:51:32 compute-0 sudo[155060]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:32 compute-0 systemd[1]: systemd-coredump@3-154750-0.service: Deactivated successfully.
Jan 26 09:51:32 compute-0 systemd[1]: systemd-coredump@3-154750-0.service: Consumed 1.338s CPU time.
Jan 26 09:51:32 compute-0 sudo[155064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:51:32 compute-0 sudo[155064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:51:32 compute-0 sudo[155064]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:32 compute-0 podman[155089]: 2026-01-26 09:51:32.275688258 +0000 UTC m=+0.033816820 container died 65f50e5443fc0a0f613b45e2608e94e4ee7e25dc951bd6d6085af5b9894254a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:51:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f3c61f8987b0f8da97f1719b6515d20f62debfec016d46543e7fe8089e2a854-merged.mount: Deactivated successfully.
Jan 26 09:51:32 compute-0 podman[155089]: 2026-01-26 09:51:32.326780388 +0000 UTC m=+0.084908930 container remove 65f50e5443fc0a0f613b45e2608e94e4ee7e25dc951bd6d6085af5b9894254a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:51:32 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 09:51:32 compute-0 sudo[155128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:51:32 compute-0 sudo[155128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:51:32 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 09:51:32 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.634s CPU time.
Jan 26 09:51:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:51:32 compute-0 podman[155325]: 2026-01-26 09:51:32.721684747 +0000 UTC m=+0.039601337 container create b07eaaafdb47ae6559dbb4a3c072bd4fa15cc91b488c665c1fc6c8e2e4343914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 09:51:32 compute-0 sudo[155365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwzgtrearhigtyclrjsuhyyqjippgwpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421092.2715564-1692-52758616495715/AnsiballZ_copy.py'
Jan 26 09:51:32 compute-0 sudo[155365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:51:32 compute-0 systemd[1]: Started libpod-conmon-b07eaaafdb47ae6559dbb4a3c072bd4fa15cc91b488c665c1fc6c8e2e4343914.scope.
Jan 26 09:51:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:51:32 compute-0 podman[155325]: 2026-01-26 09:51:32.791667151 +0000 UTC m=+0.109583751 container init b07eaaafdb47ae6559dbb4a3c072bd4fa15cc91b488c665c1fc6c8e2e4343914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:51:32 compute-0 podman[155325]: 2026-01-26 09:51:32.798405484 +0000 UTC m=+0.116322064 container start b07eaaafdb47ae6559dbb4a3c072bd4fa15cc91b488c665c1fc6c8e2e4343914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_galileo, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 26 09:51:32 compute-0 podman[155325]: 2026-01-26 09:51:32.703937484 +0000 UTC m=+0.021854084 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:51:32 compute-0 podman[155325]: 2026-01-26 09:51:32.803155033 +0000 UTC m=+0.121071613 container attach b07eaaafdb47ae6559dbb4a3c072bd4fa15cc91b488c665c1fc6c8e2e4343914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 09:51:32 compute-0 awesome_galileo[155370]: 167 167
Jan 26 09:51:32 compute-0 systemd[1]: libpod-b07eaaafdb47ae6559dbb4a3c072bd4fa15cc91b488c665c1fc6c8e2e4343914.scope: Deactivated successfully.
Jan 26 09:51:32 compute-0 podman[155325]: 2026-01-26 09:51:32.804629553 +0000 UTC m=+0.122546143 container died b07eaaafdb47ae6559dbb4a3c072bd4fa15cc91b488c665c1fc6c8e2e4343914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_galileo, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 09:51:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b49deaa39bb9b86603e45eeb0691273680eedf6d5817def7f819c19e91413a88-merged.mount: Deactivated successfully.
Jan 26 09:51:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:32.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:32 compute-0 podman[155325]: 2026-01-26 09:51:32.838947077 +0000 UTC m=+0.156863657 container remove b07eaaafdb47ae6559dbb4a3c072bd4fa15cc91b488c665c1fc6c8e2e4343914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_galileo, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:51:32 compute-0 systemd[1]: libpod-conmon-b07eaaafdb47ae6559dbb4a3c072bd4fa15cc91b488c665c1fc6c8e2e4343914.scope: Deactivated successfully.
Jan 26 09:51:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:51:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:32.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:51:32 compute-0 python3.9[155369]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769421092.2715564-1692-52758616495715/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:51:32 compute-0 sudo[155365]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:32 compute-0 podman[155392]: 2026-01-26 09:51:32.997878758 +0000 UTC m=+0.054127392 container create e86a9bf28e3e11e2e5da83561c7f82261d6cb27b319a02d6b796fa89a0d2bdcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_fermi, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:51:33 compute-0 systemd[1]: Started libpod-conmon-e86a9bf28e3e11e2e5da83561c7f82261d6cb27b319a02d6b796fa89a0d2bdcd.scope.
Jan 26 09:51:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e770ef9b3457e655deea38240b5963996f59f9f041011c431f3ff14457d5050a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:33 compute-0 podman[155392]: 2026-01-26 09:51:32.981222835 +0000 UTC m=+0.037471489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e770ef9b3457e655deea38240b5963996f59f9f041011c431f3ff14457d5050a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e770ef9b3457e655deea38240b5963996f59f9f041011c431f3ff14457d5050a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e770ef9b3457e655deea38240b5963996f59f9f041011c431f3ff14457d5050a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e770ef9b3457e655deea38240b5963996f59f9f041011c431f3ff14457d5050a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:33 compute-0 podman[155392]: 2026-01-26 09:51:33.085997495 +0000 UTC m=+0.142246149 container init e86a9bf28e3e11e2e5da83561c7f82261d6cb27b319a02d6b796fa89a0d2bdcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:51:33 compute-0 podman[155392]: 2026-01-26 09:51:33.093478279 +0000 UTC m=+0.149726913 container start e86a9bf28e3e11e2e5da83561c7f82261d6cb27b319a02d6b796fa89a0d2bdcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_fermi, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:51:33 compute-0 podman[155392]: 2026-01-26 09:51:33.0972338 +0000 UTC m=+0.153482494 container attach e86a9bf28e3e11e2e5da83561c7f82261d6cb27b319a02d6b796fa89a0d2bdcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 09:51:33 compute-0 sudo[155487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atmtipclnkcygosndjghfhjbqforijcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421092.2715564-1692-52758616495715/AnsiballZ_systemd.py'
Jan 26 09:51:33 compute-0 sudo[155487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:33 compute-0 musing_fermi[155427]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:51:33 compute-0 musing_fermi[155427]: --> All data devices are unavailable
Jan 26 09:51:33 compute-0 systemd[1]: libpod-e86a9bf28e3e11e2e5da83561c7f82261d6cb27b319a02d6b796fa89a0d2bdcd.scope: Deactivated successfully.
Jan 26 09:51:33 compute-0 podman[155392]: 2026-01-26 09:51:33.408434483 +0000 UTC m=+0.464683117 container died e86a9bf28e3e11e2e5da83561c7f82261d6cb27b319a02d6b796fa89a0d2bdcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_fermi, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 09:51:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e770ef9b3457e655deea38240b5963996f59f9f041011c431f3ff14457d5050a-merged.mount: Deactivated successfully.
Jan 26 09:51:33 compute-0 podman[155392]: 2026-01-26 09:51:33.448468262 +0000 UTC m=+0.504716896 container remove e86a9bf28e3e11e2e5da83561c7f82261d6cb27b319a02d6b796fa89a0d2bdcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 09:51:33 compute-0 systemd[1]: libpod-conmon-e86a9bf28e3e11e2e5da83561c7f82261d6cb27b319a02d6b796fa89a0d2bdcd.scope: Deactivated successfully.
Jan 26 09:51:33 compute-0 python3.9[155489]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 09:51:33 compute-0 sudo[155128]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:33 compute-0 systemd[1]: Reloading.
Jan 26 09:51:33 compute-0 systemd-sysv-generator[155563]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:51:33 compute-0 systemd-rc-local-generator[155558]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:51:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:51:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:51:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:51:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:51:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:51:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:51:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:51:33 compute-0 sudo[155512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:51:33 compute-0 sudo[155512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:51:33 compute-0 sudo[155512]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:33 compute-0 sudo[155487]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:33 compute-0 sudo[155570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:51:33 compute-0 sudo[155570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:51:34 compute-0 sudo[155702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpirivqnlaobtbznpkqzujgfrdkjdgcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421092.2715564-1692-52758616495715/AnsiballZ_systemd.py'
Jan 26 09:51:34 compute-0 sudo[155702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:34 compute-0 podman[155712]: 2026-01-26 09:51:34.172178473 +0000 UTC m=+0.044028178 container create 14855a15ae6c814c35f93e9c7d6b5899eec400230b916c5d8253317ef80aebf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_colden, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:51:34 compute-0 systemd[1]: Started libpod-conmon-14855a15ae6c814c35f93e9c7d6b5899eec400230b916c5d8253317ef80aebf1.scope.
Jan 26 09:51:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:51:34 compute-0 podman[155712]: 2026-01-26 09:51:34.239533305 +0000 UTC m=+0.111383030 container init 14855a15ae6c814c35f93e9c7d6b5899eec400230b916c5d8253317ef80aebf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_colden, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:51:34 compute-0 podman[155712]: 2026-01-26 09:51:34.245866367 +0000 UTC m=+0.117716072 container start 14855a15ae6c814c35f93e9c7d6b5899eec400230b916c5d8253317ef80aebf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 09:51:34 compute-0 podman[155712]: 2026-01-26 09:51:34.152431746 +0000 UTC m=+0.024281481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:51:34 compute-0 podman[155712]: 2026-01-26 09:51:34.248995652 +0000 UTC m=+0.120845377 container attach 14855a15ae6c814c35f93e9c7d6b5899eec400230b916c5d8253317ef80aebf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_colden, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:51:34 compute-0 elastic_colden[155728]: 167 167
Jan 26 09:51:34 compute-0 systemd[1]: libpod-14855a15ae6c814c35f93e9c7d6b5899eec400230b916c5d8253317ef80aebf1.scope: Deactivated successfully.
Jan 26 09:51:34 compute-0 conmon[155728]: conmon 14855a15ae6c814c35f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14855a15ae6c814c35f93e9c7d6b5899eec400230b916c5d8253317ef80aebf1.scope/container/memory.events
Jan 26 09:51:34 compute-0 podman[155733]: 2026-01-26 09:51:34.288592139 +0000 UTC m=+0.022916624 container died 14855a15ae6c814c35f93e9c7d6b5899eec400230b916c5d8253317ef80aebf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 09:51:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-948b34b77d9c5e74abcf1e8aa31a097d424a10a07dfeba70dbe14c1225826943-merged.mount: Deactivated successfully.
Jan 26 09:51:34 compute-0 podman[155733]: 2026-01-26 09:51:34.3224744 +0000 UTC m=+0.056798895 container remove 14855a15ae6c814c35f93e9c7d6b5899eec400230b916c5d8253317ef80aebf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_colden, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:51:34 compute-0 systemd[1]: libpod-conmon-14855a15ae6c814c35f93e9c7d6b5899eec400230b916c5d8253317ef80aebf1.scope: Deactivated successfully.
Jan 26 09:51:34 compute-0 python3.9[155711]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:51:34 compute-0 systemd[1]: Reloading.
Jan 26 09:51:34 compute-0 podman[155758]: 2026-01-26 09:51:34.500603155 +0000 UTC m=+0.057820834 container create 4e087b1ecfb9dfdf491e581744d80223fdf09330ac1a94257c42ace62f2e711c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ellis, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:51:34 compute-0 systemd-rc-local-generator[155798]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:51:34 compute-0 systemd-sysv-generator[155801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:51:34 compute-0 podman[155758]: 2026-01-26 09:51:34.481918306 +0000 UTC m=+0.039135995 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:51:34 compute-0 ceph-mon[74456]: pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:51:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:51:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:51:34 compute-0 systemd[1]: Started libpod-conmon-4e087b1ecfb9dfdf491e581744d80223fdf09330ac1a94257c42ace62f2e711c.scope.
Jan 26 09:51:34 compute-0 systemd[1]: Starting ovn_controller container...
Jan 26 09:51:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f072ab1436feb8fc623ea836516f0a8bc19c807c8aaf12790c28c6c78611513/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f072ab1436feb8fc623ea836516f0a8bc19c807c8aaf12790c28c6c78611513/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f072ab1436feb8fc623ea836516f0a8bc19c807c8aaf12790c28c6c78611513/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f072ab1436feb8fc623ea836516f0a8bc19c807c8aaf12790c28c6c78611513/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:34 compute-0 podman[155758]: 2026-01-26 09:51:34.787135356 +0000 UTC m=+0.344353065 container init 4e087b1ecfb9dfdf491e581744d80223fdf09330ac1a94257c42ace62f2e711c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ellis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 26 09:51:34 compute-0 podman[155758]: 2026-01-26 09:51:34.802466713 +0000 UTC m=+0.359684392 container start 4e087b1ecfb9dfdf491e581744d80223fdf09330ac1a94257c42ace62f2e711c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ellis, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 09:51:34 compute-0 podman[155758]: 2026-01-26 09:51:34.806650487 +0000 UTC m=+0.363868196 container attach 4e087b1ecfb9dfdf491e581744d80223fdf09330ac1a94257c42ace62f2e711c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ellis, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 26 09:51:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:34.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:51:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764f790bb4961107a13b002100225a8fff2c052617434815a3c2d64f5fa61c3f/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:51:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:34.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:51:34 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9.
Jan 26 09:51:34 compute-0 podman[155817]: 2026-01-26 09:51:34.893240792 +0000 UTC m=+0.134488588 container init 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Jan 26 09:51:34 compute-0 ovn_controller[155832]: + sudo -E kolla_set_configs
Jan 26 09:51:34 compute-0 podman[155817]: 2026-01-26 09:51:34.936160059 +0000 UTC m=+0.177407815 container start 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 09:51:34 compute-0 edpm-start-podman-container[155817]: ovn_controller
Jan 26 09:51:34 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 26 09:51:34 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 26 09:51:34 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 26 09:51:35 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 26 09:51:35 compute-0 edpm-start-podman-container[155815]: Creating additional drop-in dependency for "ovn_controller" (6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9)
Jan 26 09:51:35 compute-0 systemd[155871]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 26 09:51:35 compute-0 podman[155839]: 2026-01-26 09:51:35.02558565 +0000 UTC m=+0.088128596 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 09:51:35 compute-0 systemd[1]: 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9-2b8e4e42e0137099.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 09:51:35 compute-0 systemd[1]: 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9-2b8e4e42e0137099.service: Failed with result 'exit-code'.
Jan 26 09:51:35 compute-0 systemd[1]: Reloading.
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]: {
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:     "0": [
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:         {
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:             "devices": [
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "/dev/loop3"
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:             ],
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:             "lv_name": "ceph_lv0",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:             "lv_size": "21470642176",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:             "name": "ceph_lv0",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:             "tags": {
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "ceph.cluster_name": "ceph",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "ceph.crush_device_class": "",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "ceph.encrypted": "0",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "ceph.osd_id": "0",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "ceph.type": "block",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "ceph.vdo": "0",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:                 "ceph.with_tpm": "0"
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:             },
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:             "type": "block",
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:             "vg_name": "ceph_vg0"
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:         }
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]:     ]
Jan 26 09:51:35 compute-0 vigorous_ellis[155812]: }
Jan 26 09:51:35 compute-0 podman[155758]: 2026-01-26 09:51:35.098245917 +0000 UTC m=+0.655463626 container died 4e087b1ecfb9dfdf491e581744d80223fdf09330ac1a94257c42ace62f2e711c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:51:35 compute-0 systemd-rc-local-generator[155915]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:51:35 compute-0 systemd-sysv-generator[155921]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:51:35 compute-0 systemd[155871]: Queued start job for default target Main User Target.
Jan 26 09:51:35 compute-0 systemd[155871]: Created slice User Application Slice.
Jan 26 09:51:35 compute-0 systemd[155871]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 26 09:51:35 compute-0 systemd[155871]: Started Daily Cleanup of User's Temporary Directories.
Jan 26 09:51:35 compute-0 systemd[155871]: Reached target Paths.
Jan 26 09:51:35 compute-0 systemd[155871]: Reached target Timers.
Jan 26 09:51:35 compute-0 systemd[155871]: Starting D-Bus User Message Bus Socket...
Jan 26 09:51:35 compute-0 systemd[155871]: Starting Create User's Volatile Files and Directories...
Jan 26 09:51:35 compute-0 systemd[155871]: Finished Create User's Volatile Files and Directories.
Jan 26 09:51:35 compute-0 systemd[155871]: Listening on D-Bus User Message Bus Socket.
Jan 26 09:51:35 compute-0 systemd[155871]: Reached target Sockets.
Jan 26 09:51:35 compute-0 systemd[155871]: Reached target Basic System.
Jan 26 09:51:35 compute-0 systemd[155871]: Reached target Main User Target.
Jan 26 09:51:35 compute-0 systemd[155871]: Startup finished in 174ms.
Jan 26 09:51:35 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 26 09:51:35 compute-0 systemd[1]: Started ovn_controller container.
Jan 26 09:51:35 compute-0 systemd[1]: libpod-4e087b1ecfb9dfdf491e581744d80223fdf09330ac1a94257c42ace62f2e711c.scope: Deactivated successfully.
Jan 26 09:51:35 compute-0 systemd[1]: Started Session c1 of User root.
Jan 26 09:51:35 compute-0 sudo[155702]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:35 compute-0 ovn_controller[155832]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 09:51:35 compute-0 ovn_controller[155832]: INFO:__main__:Validating config file
Jan 26 09:51:35 compute-0 ovn_controller[155832]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 09:51:35 compute-0 ovn_controller[155832]: INFO:__main__:Writing out command to execute
Jan 26 09:51:35 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 26 09:51:35 compute-0 ovn_controller[155832]: ++ cat /run_command
Jan 26 09:51:35 compute-0 ovn_controller[155832]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 26 09:51:35 compute-0 ovn_controller[155832]: + ARGS=
Jan 26 09:51:35 compute-0 ovn_controller[155832]: + sudo kolla_copy_cacerts
Jan 26 09:51:35 compute-0 systemd[1]: Started Session c2 of User root.
Jan 26 09:51:35 compute-0 ovn_controller[155832]: + [[ ! -n '' ]]
Jan 26 09:51:35 compute-0 ovn_controller[155832]: + . kolla_extend_start
Jan 26 09:51:35 compute-0 ovn_controller[155832]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 26 09:51:35 compute-0 ovn_controller[155832]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 26 09:51:35 compute-0 ovn_controller[155832]: + umask 0022
Jan 26 09:51:35 compute-0 ovn_controller[155832]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 26 09:51:35 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 26 09:51:35 compute-0 NetworkManager[48970]: <info>  [1769421095.4874] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 26 09:51:35 compute-0 NetworkManager[48970]: <info>  [1769421095.4882] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 09:51:35 compute-0 NetworkManager[48970]: <warn>  [1769421095.4885] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 09:51:35 compute-0 NetworkManager[48970]: <info>  [1769421095.4895] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 26 09:51:35 compute-0 NetworkManager[48970]: <info>  [1769421095.4906] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 26 09:51:35 compute-0 NetworkManager[48970]: <info>  [1769421095.4911] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 26 09:51:35 compute-0 kernel: br-int: entered promiscuous mode
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00024|main|INFO|OVS feature set changed, force recompute.
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 26 09:51:35 compute-0 NetworkManager[48970]: <info>  [1769421095.5176] manager: (ovn-80993c-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 26 09:51:35 compute-0 systemd-udevd[155980]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 09:51:35 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 26 09:51:35 compute-0 NetworkManager[48970]: <info>  [1769421095.5337] device (genev_sys_6081): carrier: link connected
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 26 09:51:35 compute-0 NetworkManager[48970]: <info>  [1769421095.5341] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 26 09:51:35 compute-0 ovn_controller[155832]: 2026-01-26T09:51:35Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 26 09:51:35 compute-0 systemd-udevd[155982]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 09:51:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f072ab1436feb8fc623ea836516f0a8bc19c807c8aaf12790c28c6c78611513-merged.mount: Deactivated successfully.
Jan 26 09:51:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095136 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:51:36 compute-0 sudo[155994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:51:36 compute-0 sudo[155994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:51:36 compute-0 sudo[155994]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:36 compute-0 ceph-mon[74456]: pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:51:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:51:36] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Jan 26 09:51:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:51:36] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Jan 26 09:51:36 compute-0 podman[155758]: 2026-01-26 09:51:36.683475067 +0000 UTC m=+2.240692766 container remove 4e087b1ecfb9dfdf491e581744d80223fdf09330ac1a94257c42ace62f2e711c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ellis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 09:51:36 compute-0 systemd[1]: libpod-conmon-4e087b1ecfb9dfdf491e581744d80223fdf09330ac1a94257c42ace62f2e711c.scope: Deactivated successfully.
Jan 26 09:51:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:51:36 compute-0 sudo[155570]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:36 compute-0 sudo[156019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:51:36 compute-0 sudo[156019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:51:36 compute-0 sudo[156019]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:36.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:36 compute-0 sudo[156044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:51:36 compute-0 sudo[156044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:51:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:51:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:36.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:51:36 compute-0 NetworkManager[48970]: <info>  [1769421096.9435] manager: (ovn-5f259f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 26 09:51:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:51:36.991Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:51:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:51:36.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:51:37 compute-0 NetworkManager[48970]: <info>  [1769421097.0379] manager: (ovn-8128a1-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 26 09:51:37 compute-0 podman[156155]: 2026-01-26 09:51:37.27857046 +0000 UTC m=+0.046146336 container create e71fe035aab4f2d206f2db4053bfc667255531a87b132e582216bfbf9fd0e3f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:51:37 compute-0 systemd[1]: Started libpod-conmon-e71fe035aab4f2d206f2db4053bfc667255531a87b132e582216bfbf9fd0e3f7.scope.
Jan 26 09:51:37 compute-0 podman[156155]: 2026-01-26 09:51:37.262106102 +0000 UTC m=+0.029681998 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:51:37 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:51:37 compute-0 podman[156155]: 2026-01-26 09:51:37.367440597 +0000 UTC m=+0.135016493 container init e71fe035aab4f2d206f2db4053bfc667255531a87b132e582216bfbf9fd0e3f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 26 09:51:37 compute-0 podman[156155]: 2026-01-26 09:51:37.373025118 +0000 UTC m=+0.140600994 container start e71fe035aab4f2d206f2db4053bfc667255531a87b132e582216bfbf9fd0e3f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:51:37 compute-0 pensive_chatelet[156177]: 167 167
Jan 26 09:51:37 compute-0 podman[156155]: 2026-01-26 09:51:37.376375089 +0000 UTC m=+0.143950995 container attach e71fe035aab4f2d206f2db4053bfc667255531a87b132e582216bfbf9fd0e3f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_chatelet, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 09:51:37 compute-0 systemd[1]: libpod-e71fe035aab4f2d206f2db4053bfc667255531a87b132e582216bfbf9fd0e3f7.scope: Deactivated successfully.
Jan 26 09:51:37 compute-0 podman[156155]: 2026-01-26 09:51:37.377312705 +0000 UTC m=+0.144888581 container died e71fe035aab4f2d206f2db4053bfc667255531a87b132e582216bfbf9fd0e3f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_chatelet, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 09:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc218180c5e302c51b11d3a3290b8f83a120843661e677d25f2c78c7ab67a3ba-merged.mount: Deactivated successfully.
Jan 26 09:51:37 compute-0 podman[156155]: 2026-01-26 09:51:37.411118154 +0000 UTC m=+0.178694030 container remove e71fe035aab4f2d206f2db4053bfc667255531a87b132e582216bfbf9fd0e3f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:51:37 compute-0 systemd[1]: libpod-conmon-e71fe035aab4f2d206f2db4053bfc667255531a87b132e582216bfbf9fd0e3f7.scope: Deactivated successfully.
Jan 26 09:51:37 compute-0 podman[156224]: 2026-01-26 09:51:37.583485302 +0000 UTC m=+0.059142260 container create 1971afad78275a7eb52405939399c5dc44fc5b7c4ce0ad0bd80eac1ac8a353a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_fermat, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 09:51:37 compute-0 systemd[1]: Started libpod-conmon-1971afad78275a7eb52405939399c5dc44fc5b7c4ce0ad0bd80eac1ac8a353a8.scope.
Jan 26 09:51:37 compute-0 ceph-mon[74456]: pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:51:37 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:51:37 compute-0 podman[156224]: 2026-01-26 09:51:37.559654784 +0000 UTC m=+0.035311732 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1194a52947c869ce046f23b119b79606375eca5667d5504630d2ef29abccbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1194a52947c869ce046f23b119b79606375eca5667d5504630d2ef29abccbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1194a52947c869ce046f23b119b79606375eca5667d5504630d2ef29abccbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1194a52947c869ce046f23b119b79606375eca5667d5504630d2ef29abccbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:37 compute-0 podman[156224]: 2026-01-26 09:51:37.669851381 +0000 UTC m=+0.145508299 container init 1971afad78275a7eb52405939399c5dc44fc5b7c4ce0ad0bd80eac1ac8a353a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_fermat, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 09:51:37 compute-0 podman[156224]: 2026-01-26 09:51:37.676882672 +0000 UTC m=+0.152539590 container start 1971afad78275a7eb52405939399c5dc44fc5b7c4ce0ad0bd80eac1ac8a353a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:51:37 compute-0 podman[156224]: 2026-01-26 09:51:37.680301774 +0000 UTC m=+0.155958692 container attach 1971afad78275a7eb52405939399c5dc44fc5b7c4ce0ad0bd80eac1ac8a353a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 09:51:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:51:37 compute-0 python3.9[156288]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 26 09:51:38 compute-0 lvm[156389]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:51:38 compute-0 lvm[156389]: VG ceph_vg0 finished
Jan 26 09:51:38 compute-0 inspiring_fermat[156291]: {}
Jan 26 09:51:38 compute-0 systemd[1]: libpod-1971afad78275a7eb52405939399c5dc44fc5b7c4ce0ad0bd80eac1ac8a353a8.scope: Deactivated successfully.
Jan 26 09:51:38 compute-0 podman[156224]: 2026-01-26 09:51:38.327897036 +0000 UTC m=+0.803553944 container died 1971afad78275a7eb52405939399c5dc44fc5b7c4ce0ad0bd80eac1ac8a353a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 26 09:51:38 compute-0 systemd[1]: libpod-1971afad78275a7eb52405939399c5dc44fc5b7c4ce0ad0bd80eac1ac8a353a8.scope: Consumed 1.017s CPU time.
Jan 26 09:51:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c1194a52947c869ce046f23b119b79606375eca5667d5504630d2ef29abccbc-merged.mount: Deactivated successfully.
Jan 26 09:51:38 compute-0 podman[156224]: 2026-01-26 09:51:38.375641664 +0000 UTC m=+0.851298582 container remove 1971afad78275a7eb52405939399c5dc44fc5b7c4ce0ad0bd80eac1ac8a353a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_fermat, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 26 09:51:38 compute-0 systemd[1]: libpod-conmon-1971afad78275a7eb52405939399c5dc44fc5b7c4ce0ad0bd80eac1ac8a353a8.scope: Deactivated successfully.
Jan 26 09:51:38 compute-0 sudo[156044]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:51:38 compute-0 sudo[156530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjjqcndvyhlpncsammpmmrezlncwcafq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421098.412155-1827-188291072375907/AnsiballZ_stat.py'
Jan 26 09:51:38 compute-0 sudo[156530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:51:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:38.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:38 compute-0 python3.9[156532]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:51:38 compute-0 sudo[156530]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:51:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:38.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:51:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:51:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:51:39 compute-0 sudo[156653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhzbjbtgnvvbwtsnlabwjywcfiasrtpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421098.412155-1827-188291072375907/AnsiballZ_copy.py'
Jan 26 09:51:39 compute-0 sudo[156653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:39 compute-0 python3.9[156655]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421098.412155-1827-188291072375907/.source.yaml _original_basename=.agvr_iww follow=False checksum=da37636c5844ad86706b9cbdcceae3b87fc97017 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:51:39 compute-0 sudo[156653]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:51:39 compute-0 sudo[156680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:51:39 compute-0 sudo[156680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:51:39 compute-0 sudo[156680]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:40 compute-0 ceph-mon[74456]: pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:51:40 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:51:40 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:51:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:51:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:51:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:40.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:51:40 compute-0 sudo[156832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhlwltorleioojmebfnbyrxcchpaigof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421100.6295755-1872-262688020534519/AnsiballZ_command.py'
Jan 26 09:51:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:40.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:40 compute-0 sudo[156832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:41 compute-0 python3.9[156834]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:51:41 compute-0 ovs-vsctl[156835]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 26 09:51:41 compute-0 sudo[156832]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095141 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:51:42 compute-0 ceph-mon[74456]: pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:51:42 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 4.
Jan 26 09:51:42 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:51:42 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.634s CPU time.
Jan 26 09:51:42 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:51:42 compute-0 sudo[156987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erkulbhukccbuzlbxefxipidhaocipnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421101.7072394-1896-76139136296804/AnsiballZ_command.py'
Jan 26 09:51:42 compute-0 sudo[156987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:42 compute-0 python3.9[156991]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:51:42 compute-0 ovs-vsctl[157043]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 26 09:51:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:51:42 compute-0 sudo[156987]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:42 compute-0 podman[157036]: 2026-01-26 09:51:42.750678825 +0000 UTC m=+0.053530971 container create 642f16de31d67c9f41ad4718d33929158349fb8d206bb84571a9ac851b212557 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 26 09:51:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f5759c5fda7426fee3583209ebdf40d78e6906c490a2e4800c31c728b544034/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f5759c5fda7426fee3583209ebdf40d78e6906c490a2e4800c31c728b544034/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f5759c5fda7426fee3583209ebdf40d78e6906c490a2e4800c31c728b544034/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f5759c5fda7426fee3583209ebdf40d78e6906c490a2e4800c31c728b544034/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:51:42 compute-0 podman[157036]: 2026-01-26 09:51:42.813736644 +0000 UTC m=+0.116588810 container init 642f16de31d67c9f41ad4718d33929158349fb8d206bb84571a9ac851b212557 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 26 09:51:42 compute-0 podman[157036]: 2026-01-26 09:51:42.722886102 +0000 UTC m=+0.025738298 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:51:42 compute-0 podman[157036]: 2026-01-26 09:51:42.818800391 +0000 UTC m=+0.121652537 container start 642f16de31d67c9f41ad4718d33929158349fb8d206bb84571a9ac851b212557 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:51:42 compute-0 bash[157036]: 642f16de31d67c9f41ad4718d33929158349fb8d206bb84571a9ac851b212557
Jan 26 09:51:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:42 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 09:51:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:42 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 09:51:42 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:51:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:42.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:42 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 09:51:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:42 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 09:51:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:42 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 09:51:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:42 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 09:51:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:42 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 09:51:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:42.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:42 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:51:43 compute-0 sudo[157246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhebshexnnkeioiczzsjwmmkilorowod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421103.369691-1938-119914134289977/AnsiballZ_command.py'
Jan 26 09:51:43 compute-0 sudo[157246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:43 compute-0 python3.9[157248]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:51:43 compute-0 ovs-vsctl[157249]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 26 09:51:43 compute-0 sudo[157246]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:44 compute-0 ceph-mon[74456]: pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:51:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:51:44 compute-0 sshd-session[143943]: Connection closed by 192.168.122.30 port 42048
Jan 26 09:51:44 compute-0 sshd-session[143940]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:51:44 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 26 09:51:44 compute-0 systemd[1]: session-50.scope: Consumed 1min 2.644s CPU time.
Jan 26 09:51:44 compute-0 systemd-logind[787]: Session 50 logged out. Waiting for processes to exit.
Jan 26 09:51:44 compute-0 systemd-logind[787]: Removed session 50.
Jan 26 09:51:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:44.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:44.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:45 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 26 09:51:45 compute-0 systemd[155871]: Activating special unit Exit the Session...
Jan 26 09:51:45 compute-0 systemd[155871]: Stopped target Main User Target.
Jan 26 09:51:45 compute-0 systemd[155871]: Stopped target Basic System.
Jan 26 09:51:45 compute-0 systemd[155871]: Stopped target Paths.
Jan 26 09:51:45 compute-0 systemd[155871]: Stopped target Sockets.
Jan 26 09:51:45 compute-0 systemd[155871]: Stopped target Timers.
Jan 26 09:51:45 compute-0 systemd[155871]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 26 09:51:45 compute-0 systemd[155871]: Closed D-Bus User Message Bus Socket.
Jan 26 09:51:45 compute-0 systemd[155871]: Stopped Create User's Volatile Files and Directories.
Jan 26 09:51:45 compute-0 systemd[155871]: Removed slice User Application Slice.
Jan 26 09:51:45 compute-0 systemd[155871]: Reached target Shutdown.
Jan 26 09:51:45 compute-0 systemd[155871]: Finished Exit the Session.
Jan 26 09:51:45 compute-0 systemd[155871]: Reached target Exit the Session.
Jan 26 09:51:45 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 26 09:51:45 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 26 09:51:45 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 26 09:51:45 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 26 09:51:45 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 26 09:51:45 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 26 09:51:45 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 26 09:51:46 compute-0 ceph-mon[74456]: pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:51:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:51:46] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Jan 26 09:51:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:51:46] "GET /metrics HTTP/1.1" 200 48259 "" "Prometheus/2.51.0"
Jan 26 09:51:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:51:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:46.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:51:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:46.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:51:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:51:46.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:51:47 compute-0 ceph-mon[74456]: pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:51:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:51:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:51:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:51:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 425 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:51:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:51:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:51:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:51:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:51:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:51:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:51:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:51:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:48.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:48.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:48 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:51:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:48 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:51:49 compute-0 sshd-session[157282]: Accepted publickey for zuul from 192.168.122.30 port 59692 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:51:49 compute-0 systemd-logind[787]: New session 52 of user zuul.
Jan 26 09:51:49 compute-0 systemd[1]: Started Session 52 of User zuul.
Jan 26 09:51:49 compute-0 sshd-session[157282]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:51:49 compute-0 ceph-mon[74456]: pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 425 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:51:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 766 B/s wr, 2 op/s
Jan 26 09:51:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:50.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:50 compute-0 python3.9[157437]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:51:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:50.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:51 compute-0 sudo[157591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kinxihgxlrkmubxyljfjafobhybiowby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421111.4448624-57-166344955613049/AnsiballZ_file.py'
Jan 26 09:51:51 compute-0 sudo[157591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:51 compute-0 ceph-mon[74456]: pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 766 B/s wr, 2 op/s
Jan 26 09:51:52 compute-0 python3.9[157593]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:51:52 compute-0 sudo[157591]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:52 compute-0 sudo[157745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bczkjldmqbmelabbftmhqrnmwvigzbjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421112.2234101-57-203471693433182/AnsiballZ_file.py'
Jan 26 09:51:52 compute-0 sudo[157745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:52 compute-0 python3.9[157747]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:51:52 compute-0 sudo[157745]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 766 B/s wr, 2 op/s
Jan 26 09:51:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:51:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:52.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:52.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:53 compute-0 sudo[157897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfuyworurksnnydpbotjzzyqthabigav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421112.7528753-57-81962331230366/AnsiballZ_file.py'
Jan 26 09:51:53 compute-0 sudo[157897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:53 compute-0 python3.9[157899]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:51:53 compute-0 sudo[157897]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:53 compute-0 sudo[158050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdhmyrqvpjjdggfsnydcdpcdvumnqink ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421113.3508835-57-166713757998742/AnsiballZ_file.py'
Jan 26 09:51:53 compute-0 sudo[158050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:53 compute-0 python3.9[158052]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:51:53 compute-0 sudo[158050]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:53 compute-0 ceph-mon[74456]: pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 766 B/s wr, 2 op/s
Jan 26 09:51:54 compute-0 sudo[158204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtkogtodtbagyfmtkpuwmshmpskhuiuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421114.0697865-57-222910145057800/AnsiballZ_file.py'
Jan 26 09:51:54 compute-0 sudo[158204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:54 compute-0 python3.9[158206]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:51:54 compute-0 sudo[158204]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1022 B/s wr, 3 op/s
Jan 26 09:51:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:51:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:54.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:51:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:54.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:51:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:54 : epoch 6977392e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:51:56 compute-0 ceph-mon[74456]: pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1022 B/s wr, 3 op/s
Jan 26 09:51:56 compute-0 python3.9[158368]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:51:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:56 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1274000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:56 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:51:56] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Jan 26 09:51:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:51:56] "GET /metrics HTTP/1.1" 200 48261 "" "Prometheus/2.51.0"
Jan 26 09:51:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:56 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 936 B/s wr, 2 op/s
Jan 26 09:51:56 compute-0 sshd-session[158421]: Invalid user test from 157.245.76.178 port 44136
Jan 26 09:51:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:56.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:56.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:56 compute-0 sshd-session[158421]: Connection closed by invalid user test 157.245.76.178 port 44136 [preauth]
Jan 26 09:51:56 compute-0 sudo[158499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:51:56 compute-0 sudo[158499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:51:56 compute-0 sudo[158548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kecemerkqoxergrvozqlrqneeexuxqrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421116.468205-189-188099424709205/AnsiballZ_seboolean.py'
Jan 26 09:51:56 compute-0 sudo[158499]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:56 compute-0 sudo[158548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:51:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:51:56.993Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:51:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:51:56.994Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:51:57 compute-0 python3.9[158552]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 26 09:51:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:51:57 compute-0 sudo[158548]: pam_unix(sudo:session): session closed for user root
Jan 26 09:51:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:57 : epoch 6977392e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:51:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:57 : epoch 6977392e : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:51:58 compute-0 ceph-mon[74456]: pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 936 B/s wr, 2 op/s
Jan 26 09:51:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095158 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:51:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:58 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:58 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c000fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:51:58 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:51:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 936 B/s wr, 2 op/s
Jan 26 09:51:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:51:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:51:58.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:51:58 compute-0 python3.9[158704]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:51:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:51:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:51:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:51:58.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:51:59 compute-0 python3.9[158825]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421118.137377-213-153971322949592/.source follow=False _original_basename=haproxy.j2 checksum=1daf285be4abb25cbd7ba376734de140aac9aefe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:52:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:00 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:00 compute-0 python3.9[158975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:00 : epoch 6977392e : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:52:00 compute-0 ceph-mon[74456]: pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 936 B/s wr, 2 op/s
Jan 26 09:52:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:00 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:00 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Jan 26 09:52:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:00.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:52:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:00.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:52:01 compute-0 python3.9[159098]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421119.9339058-258-30421698143702/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:52:01 compute-0 ceph-mon[74456]: pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Jan 26 09:52:01 compute-0 sudo[159248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btirpxbkcyujhobehusnnxrljtsbzvux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421121.538431-309-199183483786932/AnsiballZ_setup.py'
Jan 26 09:52:01 compute-0 sudo[159248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:02 compute-0 python3.9[159250]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:52:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:02 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:02 compute-0 sudo[159248]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:02 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12480016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:02 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:52:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:52:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:02.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:02.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:02 compute-0 sudo[159334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmsuksdshffimvzrukzaolomsfmajofa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421121.538431-309-199183483786932/AnsiballZ_dnf.py'
Jan 26 09:52:02 compute-0 sudo[159334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:03 compute-0 python3.9[159336]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:52:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:52:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:52:03 compute-0 ceph-mon[74456]: pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:52:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:52:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095203 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:52:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:04 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:04 compute-0 sudo[159334]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:04 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250002140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:04 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:52:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:04.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:04.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:05 compute-0 ovn_controller[155832]: 2026-01-26T09:52:05Z|00025|memory|INFO|16256 kB peak resident set size after 29.7 seconds
Jan 26 09:52:05 compute-0 ovn_controller[155832]: 2026-01-26T09:52:05Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Jan 26 09:52:05 compute-0 podman[159416]: 2026-01-26 09:52:05.188380413 +0000 UTC m=+0.115670816 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 26 09:52:05 compute-0 sudo[159513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghvvruhdoqpoqbmbihmonjqclhnddkvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421124.677367-345-34077171402477/AnsiballZ_systemd.py'
Jan 26 09:52:05 compute-0 sudo[159513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:05 compute-0 python3.9[159515]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 09:52:05 compute-0 sudo[159513]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:05 compute-0 ceph-mon[74456]: pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:52:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:06 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250002140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:06 compute-0 python3.9[159670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:06 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:52:06] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:52:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:52:06] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:52:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:06 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 09:52:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:52:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:06.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:52:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:06.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:52:06.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:52:07 compute-0 python3.9[159791]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421125.9586964-369-121652808604921/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:52:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:52:07 compute-0 python3.9[159941]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:07 compute-0 ceph-mon[74456]: pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 09:52:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:08 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:08 compute-0 python3.9[160062]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421127.2864823-369-239072008345088/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:52:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:08 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250002140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:08 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c002f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 09:52:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:08.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:08.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:09 compute-0 python3.9[160214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:09 compute-0 ceph-mon[74456]: pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 09:52:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:10 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:10 compute-0 python3.9[160335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421129.2997448-501-174437843764846/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:52:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:10 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:10 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 09:52:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:52:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:10.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:52:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:52:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:10.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:52:11 compute-0 python3.9[160487]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:12 compute-0 python3.9[160608]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421130.6438072-501-268632716336686/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:52:12 compute-0 ceph-mon[74456]: pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 09:52:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:12 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c002f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:12 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:12 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12480032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:52:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:52:12 compute-0 python3.9[160760]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:52:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:12.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:12.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:13 compute-0 sudo[160912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddpcmrsniptuultndraubzplefshbioz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421133.1614337-615-75028641344827/AnsiballZ_file.py'
Jan 26 09:52:13 compute-0 sudo[160912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:13 compute-0 python3.9[160914]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:52:13 compute-0 sudo[160912]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:14 compute-0 sudo[161066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayfksubbjrcqcocpmpjfxekndkwpzeqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421133.9389312-639-108236753077168/AnsiballZ_stat.py'
Jan 26 09:52:14 compute-0 sudo[161066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:14 compute-0 ceph-mon[74456]: pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:52:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:14 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:14 compute-0 python3.9[161068]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:14 compute-0 sudo[161066]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:14 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c002f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:14 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:52:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:14.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:14 compute-0 sudo[161144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhljaxbcquuxrkhmxrtihdgvtmzcmcyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421133.9389312-639-108236753077168/AnsiballZ_file.py'
Jan 26 09:52:14 compute-0 sudo[161144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:52:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:14.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:52:15 compute-0 python3.9[161146]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:52:15 compute-0 sudo[161144]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:15 compute-0 sudo[161296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnxlbbjhvefcvefdasfburhfubzyvrny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421135.2649736-639-6329542890137/AnsiballZ_stat.py'
Jan 26 09:52:15 compute-0 sudo[161296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:15 compute-0 python3.9[161298]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:15 compute-0 sudo[161296]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:16 compute-0 sudo[161374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olgsbjidanyqmvciboibfitlsflrgsad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421135.2649736-639-6329542890137/AnsiballZ_file.py'
Jan 26 09:52:16 compute-0 sudo[161374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:16 compute-0 python3.9[161376]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:52:16 compute-0 sudo[161374]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:16 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f12480032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:16 compute-0 ceph-mon[74456]: pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:52:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:52:16] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:52:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:52:16] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:52:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:16 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:16 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:52:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:52:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:16.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:52:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:16.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:16 compute-0 sudo[161528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlplzgxxqkuqywlfyzseghizooltsvmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421136.5933304-708-152097556968806/AnsiballZ_file.py'
Jan 26 09:52:16 compute-0 sudo[161528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:52:16.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:52:17 compute-0 sudo[161530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:52:17 compute-0 sudo[161530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:52:17 compute-0 sudo[161530]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:17 compute-0 python3.9[161531]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:52:17 compute-0 sudo[161528]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:52:17 compute-0 sudo[161705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqvixahytmuczwlkljcjgmrswvbtrnfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421137.4412832-732-187650419356190/AnsiballZ_stat.py'
Jan 26 09:52:17 compute-0 sudo[161705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:17 compute-0 python3.9[161707]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:18 compute-0 sudo[161705]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:18 compute-0 sudo[161785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybhifxzevfqfxsehnsuhttiqwhglcktw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421137.4412832-732-187650419356190/AnsiballZ_file.py'
Jan 26 09:52:18 compute-0 sudo[161785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:18 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:18 compute-0 ceph-mon[74456]: pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:52:18 compute-0 python3.9[161787]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:52:18 compute-0 sudo[161785]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:52:18
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'volumes', '.nfs', 'default.rgw.control', '.rgw.root', 'backups', 'default.rgw.log', 'images']
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:52:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:18 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:18 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:52:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:52:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:52:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:18.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:52:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:18.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:52:19 compute-0 sudo[161937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imlbxmjrdqnjniluavtdereeqkcgvncn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421138.7533941-768-88174483338001/AnsiballZ_stat.py'
Jan 26 09:52:19 compute-0 sudo[161937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:19 compute-0 python3.9[161939]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:19 compute-0 sudo[161937]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:52:19 compute-0 sudo[162015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddyenvvxspmdyzefaumfofamuhobpjnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421138.7533941-768-88174483338001/AnsiballZ_file.py'
Jan 26 09:52:19 compute-0 sudo[162015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:19 compute-0 python3.9[162017]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:52:19 compute-0 sudo[162015]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:20 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:20 compute-0 ceph-mon[74456]: pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:52:20 compute-0 sudo[162169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xexuejvcmzhaxjjzxjokzqigkvetizkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421140.0517087-804-266261799621536/AnsiballZ_systemd.py'
Jan 26 09:52:20 compute-0 sudo[162169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:20 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:20 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:20 compute-0 python3.9[162171]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:52:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 0 B/s wr, 64 op/s
Jan 26 09:52:20 compute-0 systemd[1]: Reloading.
Jan 26 09:52:20 compute-0 systemd-rc-local-generator[162189]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:52:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:52:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:20.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:52:20 compute-0 systemd-sysv-generator[162198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:52:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:52:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:20.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:52:21 compute-0 sudo[162169]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:21 compute-0 ceph-mon[74456]: pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 0 B/s wr, 64 op/s
Jan 26 09:52:21 compute-0 sudo[162357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcafwsmzqadtsfawstyhcxroxlfenybn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421141.3719811-828-96766401093093/AnsiballZ_stat.py'
Jan 26 09:52:21 compute-0 sudo[162357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:21 compute-0 python3.9[162359]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:21 compute-0 sudo[162357]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:22 compute-0 sudo[162435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mftrtdlegdkakfijtgswtpwrddzicsrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421141.3719811-828-96766401093093/AnsiballZ_file.py'
Jan 26 09:52:22 compute-0 sudo[162435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:22 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:22 compute-0 python3.9[162437]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:52:22 compute-0 sudo[162435]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:22 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:22 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 0 B/s wr, 64 op/s
Jan 26 09:52:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:52:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:22.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:22 compute-0 sudo[162589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggaehkxeufgifsvlfjfvhszjeaaoixfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421142.614971-864-166078268481208/AnsiballZ_stat.py'
Jan 26 09:52:22 compute-0 sudo[162589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:22.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:23 compute-0 python3.9[162591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:23 compute-0 sudo[162589]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:23 compute-0 sudo[162667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fponpkmkteorozgjmrxmwlmpsbctusto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421142.614971-864-166078268481208/AnsiballZ_file.py'
Jan 26 09:52:23 compute-0 sudo[162667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:23 compute-0 python3.9[162669]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:52:23 compute-0 sudo[162667]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:23 compute-0 ceph-mon[74456]: pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 0 B/s wr, 64 op/s
Jan 26 09:52:24 compute-0 sudo[162819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzzpnhfdgyuiyfnwoyuufpmpimmxbcqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421143.884348-900-247732926409152/AnsiballZ_systemd.py'
Jan 26 09:52:24 compute-0 sudo[162819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:24 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:24 compute-0 python3.9[162821]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:52:24 compute-0 systemd[1]: Reloading.
Jan 26 09:52:24 compute-0 systemd-rc-local-generator[162846]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:52:24 compute-0 systemd-sysv-generator[162850]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:52:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:24 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:24 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Jan 26 09:52:24 compute-0 systemd[1]: Starting Create netns directory...
Jan 26 09:52:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:24.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:24 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 26 09:52:24 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 26 09:52:24 compute-0 systemd[1]: Finished Create netns directory.
Jan 26 09:52:24 compute-0 sudo[162819]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:24.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:25 compute-0 ceph-mon[74456]: pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Jan 26 09:52:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:26 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:26 compute-0 sudo[163016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryahwwdxpisaodzqqmwmdjuhzkprslsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421146.2387912-930-230007013236892/AnsiballZ_file.py'
Jan 26 09:52:26 compute-0 sudo[163016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:52:26] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:52:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:52:26] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:52:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:26 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:26 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1248004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:26 compute-0 python3.9[163018]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:52:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Jan 26 09:52:26 compute-0 sudo[163016]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:26.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:26.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:52:26.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:52:27 compute-0 sudo[163168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rceazaquaofiqysztabbbjsvydgedmxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421147.0209155-954-203749037493102/AnsiballZ_stat.py'
Jan 26 09:52:27 compute-0 sudo[163168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:27 compute-0 python3.9[163170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:27 compute-0 sudo[163168]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:52:27 compute-0 ceph-mon[74456]: pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Jan 26 09:52:27 compute-0 sudo[163291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ickvlyeyrexdhbqnsczcrxflsbtdarid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421147.0209155-954-203749037493102/AnsiballZ_copy.py'
Jan 26 09:52:27 compute-0 sudo[163291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:28 compute-0 python3.9[163293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421147.0209155-954-203749037493102/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:52:28 compute-0 sudo[163291]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:28 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:28 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:28 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f124c000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Jan 26 09:52:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:28.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:28.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:29 compute-0 sudo[163447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjzbxrfpoxhyhlgqogfqfevciifalwgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421148.7422624-1005-79220630654870/AnsiballZ_file.py'
Jan 26 09:52:29 compute-0 sudo[163447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:29 compute-0 python3.9[163449]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:52:29 compute-0 sudo[163447]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:29 compute-0 sudo[163599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpqenxlongiodicttjbvndgouqbeykfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421149.5607545-1029-133702407436434/AnsiballZ_file.py'
Jan 26 09:52:29 compute-0 sudo[163599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:29 compute-0 ceph-mon[74456]: pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Jan 26 09:52:30 compute-0 python3.9[163601]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:52:30 compute-0 sudo[163599]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:30 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:30 compute-0 sudo[163753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvfnkrexoigjszkgntulbxyicfmjzgce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421150.40028-1053-87259037263309/AnsiballZ_stat.py'
Jan 26 09:52:30 compute-0 sudo[163753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:30 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:30 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Jan 26 09:52:30 compute-0 python3.9[163755]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:30 compute-0 sudo[163753]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:30.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:30.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:31 compute-0 sudo[163876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfdgqzaueajabonearjdfnsfdvmlbrbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421150.40028-1053-87259037263309/AnsiballZ_copy.py'
Jan 26 09:52:31 compute-0 sudo[163876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:31 compute-0 python3.9[163878]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421150.40028-1053-87259037263309/.source.json _original_basename=.50w9cwup follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:52:31 compute-0 sudo[163876]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:31 compute-0 ceph-mon[74456]: pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Jan 26 09:52:32 compute-0 python3.9[164028]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:52:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:32 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1250003db0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:32 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f125c004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:32 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0025c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Jan 26 09:52:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:52:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:32.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:32.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:52:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:52:33 compute-0 ceph-mon[74456]: pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Jan 26 09:52:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:52:34 compute-0 kernel: ganesha.nfsd[158234]: segfault at 50 ip 00007f12f6e7432e sp 00007f127dffa210 error 4 in libntirpc.so.5.8[7f12f6e59000+2c000] likely on CPU 2 (core 0, socket 2)
Jan 26 09:52:34 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 09:52:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[157076]: 26/01/2026 09:52:34 : epoch 6977392e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f126c0025c0 fd 42 proxy ignored for local
Jan 26 09:52:34 compute-0 systemd[1]: Started Process Core Dump (PID 164377/UID 0).
Jan 26 09:52:34 compute-0 sudo[164455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mphjezmescxqygflllondbcgkcfemhgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421154.3277724-1173-107671256672865/AnsiballZ_container_config_data.py'
Jan 26 09:52:34 compute-0 sudo[164455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 33 op/s
Jan 26 09:52:34 compute-0 python3.9[164457]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 26 09:52:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:34.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:34 compute-0 sudo[164455]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:34.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:35 compute-0 systemd-coredump[164381]: Process 157083 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007f12f6e7432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 09:52:35 compute-0 systemd[1]: systemd-coredump@4-164377-0.service: Deactivated successfully.
Jan 26 09:52:35 compute-0 systemd[1]: systemd-coredump@4-164377-0.service: Consumed 1.017s CPU time.
Jan 26 09:52:35 compute-0 podman[164500]: 2026-01-26 09:52:35.516793539 +0000 UTC m=+0.024125785 container died 642f16de31d67c9f41ad4718d33929158349fb8d206bb84571a9ac851b212557 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:52:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f5759c5fda7426fee3583209ebdf40d78e6906c490a2e4800c31c728b544034-merged.mount: Deactivated successfully.
Jan 26 09:52:35 compute-0 podman[164500]: 2026-01-26 09:52:35.554725976 +0000 UTC m=+0.062058222 container remove 642f16de31d67c9f41ad4718d33929158349fb8d206bb84571a9ac851b212557 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:52:35 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 09:52:35 compute-0 podman[164491]: 2026-01-26 09:52:35.571989254 +0000 UTC m=+0.079955218 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 09:52:35 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 09:52:35 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.321s CPU time.
Jan 26 09:52:35 compute-0 sudo[164681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlvffssagtvymaynhwicebzxzsbkbuci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421155.5028045-1206-239593528565900/AnsiballZ_container_config_hash.py'
Jan 26 09:52:35 compute-0 sudo[164681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:35 compute-0 ceph-mon[74456]: pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 33 op/s
Jan 26 09:52:36 compute-0 python3.9[164683]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 09:52:36 compute-0 sudo[164681]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:52:36] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 26 09:52:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:52:36] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 26 09:52:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:52:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:36.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:36.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:52:36.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:52:37 compute-0 sshd-session[164710]: Invalid user test from 157.245.76.178 port 46832
Jan 26 09:52:37 compute-0 sudo[164810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:52:37 compute-0 sudo[164810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:52:37 compute-0 sudo[164810]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:37 compute-0 sshd-session[164710]: Connection closed by invalid user test 157.245.76.178 port 46832 [preauth]
Jan 26 09:52:37 compute-0 sudo[164862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acticptuooxiqpdrieuuzzdcupqgajcm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769421156.6237602-1236-234399991448397/AnsiballZ_edpm_container_manage.py'
Jan 26 09:52:37 compute-0 sudo[164862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:37 compute-0 python3[164864]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 09:52:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:52:38 compute-0 ceph-mon[74456]: pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:52:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:52:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:38.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:38.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:39 compute-0 sudo[164928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:52:39 compute-0 sudo[164928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:52:39 compute-0 sudo[164928]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:40 compute-0 sudo[164953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:52:40 compute-0 sudo[164953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:52:40 compute-0 ceph-mon[74456]: pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:52:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095240 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:52:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:52:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:40.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:40.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:42 compute-0 ceph-mon[74456]: pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:52:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:52:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:52:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:52:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:42.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:52:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:42.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:44 compute-0 sudo[164953]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:44 compute-0 ceph-mon[74456]: pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:52:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:52:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:44.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:52:44 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:52:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:52:44 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:52:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:52:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:44.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:45 compute-0 podman[164876]: 2026-01-26 09:52:45.125600749 +0000 UTC m=+7.530911654 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 26 09:52:45 compute-0 podman[165089]: 2026-01-26 09:52:45.312107876 +0000 UTC m=+0.095268569 container create 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 09:52:45 compute-0 podman[165089]: 2026-01-26 09:52:45.245069545 +0000 UTC m=+0.028230228 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 26 09:52:45 compute-0 python3[164864]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 26 09:52:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:52:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:52:45 compute-0 sudo[164862]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:52:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:52:45 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:52:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:52:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:52:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:52:45 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:52:45 compute-0 sudo[165146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:52:45 compute-0 sudo[165146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:52:45 compute-0 sudo[165146]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:45 compute-0 sudo[165177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:52:45 compute-0 sudo[165177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:52:45 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 5.
Jan 26 09:52:45 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:52:45 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.321s CPU time.
Jan 26 09:52:45 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:52:45 compute-0 ceph-mon[74456]: pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:52:45 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:52:45 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:52:45 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:52:45 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:52:45 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:52:45 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:52:45 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:52:46 compute-0 podman[165272]: 2026-01-26 09:52:46.009143997 +0000 UTC m=+0.041299584 container create 319680f311a6bff548a31262b5a1ee997ea75b437bea73f9338929472e4b9256 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 26 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a509f8521ed7fd639d7ee7e093abad1c6697bc51383586744c85cd90bf77f7/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a509f8521ed7fd639d7ee7e093abad1c6697bc51383586744c85cd90bf77f7/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a509f8521ed7fd639d7ee7e093abad1c6697bc51383586744c85cd90bf77f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a509f8521ed7fd639d7ee7e093abad1c6697bc51383586744c85cd90bf77f7/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:46 compute-0 podman[165272]: 2026-01-26 09:52:45.989427691 +0000 UTC m=+0.021583278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:52:46 compute-0 podman[165272]: 2026-01-26 09:52:46.091393582 +0000 UTC m=+0.123549199 container init 319680f311a6bff548a31262b5a1ee997ea75b437bea73f9338929472e4b9256 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:52:46 compute-0 podman[165272]: 2026-01-26 09:52:46.096415478 +0000 UTC m=+0.128571055 container start 319680f311a6bff548a31262b5a1ee997ea75b437bea73f9338929472e4b9256 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:52:46 compute-0 bash[165272]: 319680f311a6bff548a31262b5a1ee997ea75b437bea73f9338929472e4b9256
Jan 26 09:52:46 compute-0 podman[165301]: 2026-01-26 09:52:46.10496328 +0000 UTC m=+0.044866610 container create 957fc8f20b7e5aba0d25774033c84a31eb601ebf42de8f845c9319e2597fe3ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 09:52:46 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:52:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:46 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 09:52:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:46 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 09:52:46 compute-0 systemd[1]: Started libpod-conmon-957fc8f20b7e5aba0d25774033c84a31eb601ebf42de8f845c9319e2597fe3ec.scope.
Jan 26 09:52:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:46 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 09:52:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:46 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 09:52:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:46 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 09:52:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:46 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 09:52:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:46 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 09:52:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:52:46 compute-0 podman[165301]: 2026-01-26 09:52:46.177776579 +0000 UTC m=+0.117679919 container init 957fc8f20b7e5aba0d25774033c84a31eb601ebf42de8f845c9319e2597fe3ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_buck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 09:52:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:46 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:52:46 compute-0 podman[165301]: 2026-01-26 09:52:46.084296468 +0000 UTC m=+0.024199818 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:52:46 compute-0 podman[165301]: 2026-01-26 09:52:46.18404846 +0000 UTC m=+0.123951790 container start 957fc8f20b7e5aba0d25774033c84a31eb601ebf42de8f845c9319e2597fe3ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_buck, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 09:52:46 compute-0 podman[165301]: 2026-01-26 09:52:46.186821835 +0000 UTC m=+0.126725165 container attach 957fc8f20b7e5aba0d25774033c84a31eb601ebf42de8f845c9319e2597fe3ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_buck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:52:46 compute-0 amazing_buck[165337]: 167 167
Jan 26 09:52:46 compute-0 systemd[1]: libpod-957fc8f20b7e5aba0d25774033c84a31eb601ebf42de8f845c9319e2597fe3ec.scope: Deactivated successfully.
Jan 26 09:52:46 compute-0 podman[165301]: 2026-01-26 09:52:46.189320582 +0000 UTC m=+0.129223912 container died 957fc8f20b7e5aba0d25774033c84a31eb601ebf42de8f845c9319e2597fe3ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_buck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 09:52:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e45d48ed264cbcde20ad57dea2cfbc87c22b48e80edc114b78a8d3929235ad9-merged.mount: Deactivated successfully.
Jan 26 09:52:46 compute-0 podman[165301]: 2026-01-26 09:52:46.223982854 +0000 UTC m=+0.163886184 container remove 957fc8f20b7e5aba0d25774033c84a31eb601ebf42de8f845c9319e2597fe3ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_buck, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:52:46 compute-0 systemd[1]: libpod-conmon-957fc8f20b7e5aba0d25774033c84a31eb601ebf42de8f845c9319e2597fe3ec.scope: Deactivated successfully.
Jan 26 09:52:46 compute-0 podman[165384]: 2026-01-26 09:52:46.463957605 +0000 UTC m=+0.092035751 container create 1e5b019adccc96bd7b54025855b31d310f673dcc560bdc4a079d1dc3de92331a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 26 09:52:46 compute-0 podman[165384]: 2026-01-26 09:52:46.411666534 +0000 UTC m=+0.039744730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:52:46 compute-0 systemd[1]: Started libpod-conmon-1e5b019adccc96bd7b54025855b31d310f673dcc560bdc4a079d1dc3de92331a.scope.
Jan 26 09:52:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f06e85ccaa8144fa5083f50070e69234d9399defbc9bd9e4e6dc0a74b89ff6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f06e85ccaa8144fa5083f50070e69234d9399defbc9bd9e4e6dc0a74b89ff6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f06e85ccaa8144fa5083f50070e69234d9399defbc9bd9e4e6dc0a74b89ff6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f06e85ccaa8144fa5083f50070e69234d9399defbc9bd9e4e6dc0a74b89ff6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f06e85ccaa8144fa5083f50070e69234d9399defbc9bd9e4e6dc0a74b89ff6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:46 compute-0 podman[165384]: 2026-01-26 09:52:46.609251343 +0000 UTC m=+0.237329549 container init 1e5b019adccc96bd7b54025855b31d310f673dcc560bdc4a079d1dc3de92331a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_bose, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 26 09:52:46 compute-0 podman[165384]: 2026-01-26 09:52:46.623091169 +0000 UTC m=+0.251169345 container start 1e5b019adccc96bd7b54025855b31d310f673dcc560bdc4a079d1dc3de92331a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:52:46 compute-0 podman[165384]: 2026-01-26 09:52:46.628004883 +0000 UTC m=+0.256083059 container attach 1e5b019adccc96bd7b54025855b31d310f673dcc560bdc4a079d1dc3de92331a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 09:52:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:52:46] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 26 09:52:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:52:46] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 26 09:52:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:52:46 compute-0 sudo[165534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiesvgpvskedgotewyfnofzngtdtbktv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421166.5636835-1260-178364307948872/AnsiballZ_stat.py'
Jan 26 09:52:46 compute-0 sudo[165534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:46.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:46 compute-0 distracted_bose[165420]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:52:46 compute-0 distracted_bose[165420]: --> All data devices are unavailable
Jan 26 09:52:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:52:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:46.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:52:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:52:46.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:52:47 compute-0 systemd[1]: libpod-1e5b019adccc96bd7b54025855b31d310f673dcc560bdc4a079d1dc3de92331a.scope: Deactivated successfully.
Jan 26 09:52:47 compute-0 podman[165384]: 2026-01-26 09:52:47.0146805 +0000 UTC m=+0.642758686 container died 1e5b019adccc96bd7b54025855b31d310f673dcc560bdc4a079d1dc3de92331a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3f06e85ccaa8144fa5083f50070e69234d9399defbc9bd9e4e6dc0a74b89ff6-merged.mount: Deactivated successfully.
Jan 26 09:52:47 compute-0 podman[165384]: 2026-01-26 09:52:47.070927678 +0000 UTC m=+0.699005824 container remove 1e5b019adccc96bd7b54025855b31d310f673dcc560bdc4a079d1dc3de92331a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_bose, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:52:47 compute-0 systemd[1]: libpod-conmon-1e5b019adccc96bd7b54025855b31d310f673dcc560bdc4a079d1dc3de92331a.scope: Deactivated successfully.
Jan 26 09:52:47 compute-0 sudo[165177]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:47 compute-0 python3.9[165538]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:52:47 compute-0 sudo[165555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:52:47 compute-0 sudo[165555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:52:47 compute-0 sudo[165555]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:47 compute-0 sudo[165534]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:47 compute-0 sudo[165582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:52:47 compute-0 sudo[165582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:52:47 compute-0 podman[165749]: 2026-01-26 09:52:47.690505624 +0000 UTC m=+0.046694700 container create c8e87ae24b4336f5be95b17d3e6ad59462d69bb4a0f0a37568c9cf48d6a2c15b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_burnell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:52:47 compute-0 systemd[1]: Started libpod-conmon-c8e87ae24b4336f5be95b17d3e6ad59462d69bb4a0f0a37568c9cf48d6a2c15b.scope.
Jan 26 09:52:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:52:47 compute-0 podman[165749]: 2026-01-26 09:52:47.670326255 +0000 UTC m=+0.026515381 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:52:47 compute-0 podman[165749]: 2026-01-26 09:52:47.767938438 +0000 UTC m=+0.124127554 container init c8e87ae24b4336f5be95b17d3e6ad59462d69bb4a0f0a37568c9cf48d6a2c15b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:52:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:52:47 compute-0 podman[165749]: 2026-01-26 09:52:47.776144421 +0000 UTC m=+0.132333507 container start c8e87ae24b4336f5be95b17d3e6ad59462d69bb4a0f0a37568c9cf48d6a2c15b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 09:52:47 compute-0 wonderful_burnell[165790]: 167 167
Jan 26 09:52:47 compute-0 systemd[1]: libpod-c8e87ae24b4336f5be95b17d3e6ad59462d69bb4a0f0a37568c9cf48d6a2c15b.scope: Deactivated successfully.
Jan 26 09:52:47 compute-0 sudo[165819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjixduwgyyjjpgfckwcjypzrxhywikze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421167.4851692-1287-232650889138823/AnsiballZ_file.py'
Jan 26 09:52:47 compute-0 sudo[165819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:47 compute-0 podman[165749]: 2026-01-26 09:52:47.867016041 +0000 UTC m=+0.223205157 container attach c8e87ae24b4336f5be95b17d3e6ad59462d69bb4a0f0a37568c9cf48d6a2c15b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 09:52:47 compute-0 podman[165749]: 2026-01-26 09:52:47.86812034 +0000 UTC m=+0.224309456 container died c8e87ae24b4336f5be95b17d3e6ad59462d69bb4a0f0a37568c9cf48d6a2c15b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_burnell, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:52:47 compute-0 ceph-mon[74456]: pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c76ceddc3e067e95025fcba04f1ba581847b39dd10555c424015b43863b59ec-merged.mount: Deactivated successfully.
Jan 26 09:52:47 compute-0 podman[165749]: 2026-01-26 09:52:47.920462893 +0000 UTC m=+0.276651969 container remove c8e87ae24b4336f5be95b17d3e6ad59462d69bb4a0f0a37568c9cf48d6a2c15b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_burnell, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:52:47 compute-0 systemd[1]: libpod-conmon-c8e87ae24b4336f5be95b17d3e6ad59462d69bb4a0f0a37568c9cf48d6a2c15b.scope: Deactivated successfully.
Jan 26 09:52:47 compute-0 python3.9[165824]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:52:47 compute-0 sudo[165819]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:48 compute-0 podman[165858]: 2026-01-26 09:52:48.102590021 +0000 UTC m=+0.040561573 container create 9563f69eb2cc9379d0356bf2ab474ac2d98344b5daf8d3a9f56fde7ae3f5e903 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 26 09:52:48 compute-0 systemd[1]: Started libpod-conmon-9563f69eb2cc9379d0356bf2ab474ac2d98344b5daf8d3a9f56fde7ae3f5e903.scope.
Jan 26 09:52:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3398285f57d3b4cd5acb202979bc5a4da0e75cad958ccce68a7d9b836ab7b237/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3398285f57d3b4cd5acb202979bc5a4da0e75cad958ccce68a7d9b836ab7b237/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3398285f57d3b4cd5acb202979bc5a4da0e75cad958ccce68a7d9b836ab7b237/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3398285f57d3b4cd5acb202979bc5a4da0e75cad958ccce68a7d9b836ab7b237/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:48 compute-0 podman[165858]: 2026-01-26 09:52:48.178398561 +0000 UTC m=+0.116370133 container init 9563f69eb2cc9379d0356bf2ab474ac2d98344b5daf8d3a9f56fde7ae3f5e903 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_aryabhata, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 09:52:48 compute-0 podman[165858]: 2026-01-26 09:52:48.083894474 +0000 UTC m=+0.021866036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:52:48 compute-0 podman[165858]: 2026-01-26 09:52:48.185203206 +0000 UTC m=+0.123174758 container start 9563f69eb2cc9379d0356bf2ab474ac2d98344b5daf8d3a9f56fde7ae3f5e903 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_aryabhata, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 09:52:48 compute-0 podman[165858]: 2026-01-26 09:52:48.18865491 +0000 UTC m=+0.126626462 container attach 9563f69eb2cc9379d0356bf2ab474ac2d98344b5daf8d3a9f56fde7ae3f5e903 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:52:48 compute-0 sudo[165939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfdaossekqyblvaifohhurydvkczytdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421167.4851692-1287-232650889138823/AnsiballZ_stat.py'
Jan 26 09:52:48 compute-0 sudo[165939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:48 compute-0 python3.9[165941]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]: {
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:     "0": [
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:         {
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:             "devices": [
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "/dev/loop3"
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:             ],
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:             "lv_name": "ceph_lv0",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:             "lv_size": "21470642176",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:             "name": "ceph_lv0",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:             "tags": {
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "ceph.cluster_name": "ceph",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "ceph.crush_device_class": "",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "ceph.encrypted": "0",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "ceph.osd_id": "0",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "ceph.type": "block",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "ceph.vdo": "0",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:                 "ceph.with_tpm": "0"
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:             },
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:             "type": "block",
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:             "vg_name": "ceph_vg0"
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:         }
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]:     ]
Jan 26 09:52:48 compute-0 adoring_aryabhata[165885]: }
Jan 26 09:52:48 compute-0 sudo[165939]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:48 compute-0 systemd[1]: libpod-9563f69eb2cc9379d0356bf2ab474ac2d98344b5daf8d3a9f56fde7ae3f5e903.scope: Deactivated successfully.
Jan 26 09:52:48 compute-0 podman[165858]: 2026-01-26 09:52:48.443107624 +0000 UTC m=+0.381079176 container died 9563f69eb2cc9379d0356bf2ab474ac2d98344b5daf8d3a9f56fde7ae3f5e903 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 09:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3398285f57d3b4cd5acb202979bc5a4da0e75cad958ccce68a7d9b836ab7b237-merged.mount: Deactivated successfully.
Jan 26 09:52:48 compute-0 podman[165858]: 2026-01-26 09:52:48.537633043 +0000 UTC m=+0.475604635 container remove 9563f69eb2cc9379d0356bf2ab474ac2d98344b5daf8d3a9f56fde7ae3f5e903 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 09:52:48 compute-0 systemd[1]: libpod-conmon-9563f69eb2cc9379d0356bf2ab474ac2d98344b5daf8d3a9f56fde7ae3f5e903.scope: Deactivated successfully.
Jan 26 09:52:48 compute-0 sudo[165582]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:48 compute-0 sudo[166012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:52:48 compute-0 sudo[166012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:52:48 compute-0 sudo[166012]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:48 compute-0 sudo[166037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:52:48 compute-0 sudo[166037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:52:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:52:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:52:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:52:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:52:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:52:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:52:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:52:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:52:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:52:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:52:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:48.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:52:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:52:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:52:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:48.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:52:48 compute-0 sudo[166197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfuoefkpiyfovxaxwqesbeihwfxnqozb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421168.4988408-1287-281467892828898/AnsiballZ_copy.py'
Jan 26 09:52:49 compute-0 sudo[166197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:49 compute-0 podman[166201]: 2026-01-26 09:52:49.0851346 +0000 UTC m=+0.062635743 container create c22b149a4a8f58086266dce705a94d3d9a7e2976df24375d526fc7fac097349a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_hofstadter, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:52:49 compute-0 systemd[1]: Started libpod-conmon-c22b149a4a8f58086266dce705a94d3d9a7e2976df24375d526fc7fac097349a.scope.
Jan 26 09:52:49 compute-0 podman[166201]: 2026-01-26 09:52:49.043571981 +0000 UTC m=+0.021073144 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:52:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:52:49 compute-0 podman[166201]: 2026-01-26 09:52:49.166153522 +0000 UTC m=+0.143654685 container init c22b149a4a8f58086266dce705a94d3d9a7e2976df24375d526fc7fac097349a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_hofstadter, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:52:49 compute-0 podman[166201]: 2026-01-26 09:52:49.173416659 +0000 UTC m=+0.150917802 container start c22b149a4a8f58086266dce705a94d3d9a7e2976df24375d526fc7fac097349a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_hofstadter, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:52:49 compute-0 podman[166201]: 2026-01-26 09:52:49.176599045 +0000 UTC m=+0.154100188 container attach c22b149a4a8f58086266dce705a94d3d9a7e2976df24375d526fc7fac097349a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 09:52:49 compute-0 vigorous_hofstadter[166217]: 167 167
Jan 26 09:52:49 compute-0 systemd[1]: libpod-c22b149a4a8f58086266dce705a94d3d9a7e2976df24375d526fc7fac097349a.scope: Deactivated successfully.
Jan 26 09:52:49 compute-0 podman[166201]: 2026-01-26 09:52:49.179398531 +0000 UTC m=+0.156899694 container died c22b149a4a8f58086266dce705a94d3d9a7e2976df24375d526fc7fac097349a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_hofstadter, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 09:52:49 compute-0 python3.9[166200]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769421168.4988408-1287-281467892828898/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:52:49 compute-0 sudo[166197]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f25e32c851838ea8f2c570ed1df9ef7409bf3ebf657e47a6ec1fc8f0e71c4f5-merged.mount: Deactivated successfully.
Jan 26 09:52:49 compute-0 podman[166201]: 2026-01-26 09:52:49.228258119 +0000 UTC m=+0.205759262 container remove c22b149a4a8f58086266dce705a94d3d9a7e2976df24375d526fc7fac097349a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 09:52:49 compute-0 systemd[1]: libpod-conmon-c22b149a4a8f58086266dce705a94d3d9a7e2976df24375d526fc7fac097349a.scope: Deactivated successfully.
Jan 26 09:52:49 compute-0 podman[166282]: 2026-01-26 09:52:49.382165201 +0000 UTC m=+0.042131956 container create 9a139b0b549dee0f532b0afc655f1a66829ccdf46fa8d7e2ac1d73558a52e270 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_cray, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:52:49 compute-0 systemd[1]: Started libpod-conmon-9a139b0b549dee0f532b0afc655f1a66829ccdf46fa8d7e2ac1d73558a52e270.scope.
Jan 26 09:52:49 compute-0 sudo[166327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfdunleztarfyztsggtfjiuazvegfncw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421168.4988408-1287-281467892828898/AnsiballZ_systemd.py'
Jan 26 09:52:49 compute-0 sudo[166327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d95fb4402eaa88bc4b46868fd4d3e65d8dab255b895311783526797ca6d2c9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d95fb4402eaa88bc4b46868fd4d3e65d8dab255b895311783526797ca6d2c9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d95fb4402eaa88bc4b46868fd4d3e65d8dab255b895311783526797ca6d2c9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d95fb4402eaa88bc4b46868fd4d3e65d8dab255b895311783526797ca6d2c9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:49 compute-0 podman[166282]: 2026-01-26 09:52:49.459357889 +0000 UTC m=+0.119324664 container init 9a139b0b549dee0f532b0afc655f1a66829ccdf46fa8d7e2ac1d73558a52e270 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 26 09:52:49 compute-0 podman[166282]: 2026-01-26 09:52:49.365230321 +0000 UTC m=+0.025197076 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:52:49 compute-0 podman[166282]: 2026-01-26 09:52:49.46602228 +0000 UTC m=+0.125989035 container start 9a139b0b549dee0f532b0afc655f1a66829ccdf46fa8d7e2ac1d73558a52e270 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_cray, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:52:49 compute-0 podman[166282]: 2026-01-26 09:52:49.470794219 +0000 UTC m=+0.130760974 container attach 9a139b0b549dee0f532b0afc655f1a66829ccdf46fa8d7e2ac1d73558a52e270 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:52:49 compute-0 python3.9[166333]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 09:52:49 compute-0 systemd[1]: Reloading.
Jan 26 09:52:50 compute-0 systemd-sysv-generator[166384]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:52:50 compute-0 systemd-rc-local-generator[166380]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:52:50 compute-0 ceph-mon[74456]: pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:52:50 compute-0 sudo[166327]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:50 compute-0 lvm[166458]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:52:50 compute-0 lvm[166458]: VG ceph_vg0 finished
Jan 26 09:52:50 compute-0 friendly_cray[166331]: {}
Jan 26 09:52:50 compute-0 systemd[1]: libpod-9a139b0b549dee0f532b0afc655f1a66829ccdf46fa8d7e2ac1d73558a52e270.scope: Deactivated successfully.
Jan 26 09:52:50 compute-0 systemd[1]: libpod-9a139b0b549dee0f532b0afc655f1a66829ccdf46fa8d7e2ac1d73558a52e270.scope: Consumed 2.252s CPU time.
Jan 26 09:52:50 compute-0 podman[166282]: 2026-01-26 09:52:50.752873367 +0000 UTC m=+1.412840162 container died 9a139b0b549dee0f532b0afc655f1a66829ccdf46fa8d7e2ac1d73558a52e270 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 09:52:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:52:50 compute-0 sudo[166533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzewnwvpafvbntskmyjhaamsxsliswit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421168.4988408-1287-281467892828898/AnsiballZ_systemd.py'
Jan 26 09:52:50 compute-0 sudo[166533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:50.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:50.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:52 compute-0 ceph-mon[74456]: pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d95fb4402eaa88bc4b46868fd4d3e65d8dab255b895311783526797ca6d2c9a-merged.mount: Deactivated successfully.
Jan 26 09:52:52 compute-0 podman[166282]: 2026-01-26 09:52:52.062840342 +0000 UTC m=+2.722807127 container remove 9a139b0b549dee0f532b0afc655f1a66829ccdf46fa8d7e2ac1d73558a52e270 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 26 09:52:52 compute-0 systemd[1]: libpod-conmon-9a139b0b549dee0f532b0afc655f1a66829ccdf46fa8d7e2ac1d73558a52e270.scope: Deactivated successfully.
Jan 26 09:52:52 compute-0 sudo[166037]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:52:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:52:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:52:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:52:52 compute-0 sudo[166537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:52:52 compute-0 sudo[166537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:52:52 compute-0 sudo[166537]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:52 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:52:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:52 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:52:52 compute-0 python3.9[166535]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:52:52 compute-0 systemd[1]: Reloading.
Jan 26 09:52:52 compute-0 systemd-rc-local-generator[166593]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:52:52 compute-0 systemd-sysv-generator[166597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:52:52 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 26 09:52:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:52:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d71bdea6c468f78b75e88469bd5f6e566b7665c1b2e48530bd6364be9982c10/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d71bdea6c468f78b75e88469bd5f6e566b7665c1b2e48530bd6364be9982c10/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 09:52:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f.
Jan 26 09:52:52 compute-0 podman[166605]: 2026-01-26 09:52:52.718640263 +0000 UTC m=+0.112982102 container init 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: + sudo -E kolla_set_configs
Jan 26 09:52:52 compute-0 podman[166605]: 2026-01-26 09:52:52.747602969 +0000 UTC m=+0.141944838 container start 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 09:52:52 compute-0 edpm-start-podman-container[166605]: ovn_metadata_agent
Jan 26 09:52:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:52:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Validating config file
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Copying service configuration files
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Writing out command to execute
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: ++ cat /run_command
Jan 26 09:52:52 compute-0 edpm-start-podman-container[166604]: Creating additional drop-in dependency for "ovn_metadata_agent" (8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f)
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: + CMD=neutron-ovn-metadata-agent
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: + ARGS=
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: + sudo kolla_copy_cacerts
Jan 26 09:52:52 compute-0 podman[166626]: 2026-01-26 09:52:52.842031696 +0000 UTC m=+0.084433615 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: + [[ ! -n '' ]]
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: + . kolla_extend_start
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: Running command: 'neutron-ovn-metadata-agent'
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: + umask 0022
Jan 26 09:52:52 compute-0 ovn_metadata_agent[166620]: + exec neutron-ovn-metadata-agent
Jan 26 09:52:52 compute-0 systemd[1]: Reloading.
Jan 26 09:52:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:52.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:52 compute-0 systemd-sysv-generator[166701]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:52:52 compute-0 systemd-rc-local-generator[166697]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:52:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:52:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:52.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:52:53 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:52:53 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:52:53 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 26 09:52:53 compute-0 sudo[166533]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:54 compute-0 python3.9[166858]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 26 09:52:54 compute-0 ceph-mon[74456]: pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.633 166625 INFO neutron.common.config [-] Logging enabled!
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.634 166625 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.634 166625 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.634 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.634 166625 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.634 166625 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.634 166625 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.635 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.635 166625 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.635 166625 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.635 166625 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.635 166625 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.635 166625 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.635 166625 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.635 166625 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.635 166625 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.635 166625 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.636 166625 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.636 166625 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.636 166625 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.636 166625 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.636 166625 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.636 166625 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.636 166625 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.636 166625 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.636 166625 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.637 166625 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.637 166625 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.637 166625 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.637 166625 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.637 166625 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.637 166625 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.637 166625 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.637 166625 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.637 166625 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.638 166625 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.638 166625 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.638 166625 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.638 166625 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.638 166625 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.638 166625 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.638 166625 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.638 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.638 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.639 166625 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.639 166625 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.639 166625 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.639 166625 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.639 166625 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.639 166625 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.639 166625 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.639 166625 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.639 166625 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.640 166625 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.640 166625 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.640 166625 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.640 166625 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.640 166625 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.640 166625 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.640 166625 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.640 166625 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.640 166625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.641 166625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.641 166625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.641 166625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.641 166625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.641 166625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.641 166625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.641 166625 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.641 166625 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.641 166625 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.642 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.642 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.642 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.642 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.642 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.642 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.642 166625 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.642 166625 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.642 166625 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.642 166625 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.643 166625 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.643 166625 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.643 166625 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.643 166625 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.643 166625 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.643 166625 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.643 166625 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.643 166625 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.643 166625 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.644 166625 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.644 166625 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.644 166625 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.644 166625 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.644 166625 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.645 166625 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.645 166625 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.645 166625 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.645 166625 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.645 166625 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.645 166625 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.646 166625 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.646 166625 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.646 166625 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.646 166625 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.646 166625 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.647 166625 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.647 166625 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.647 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.647 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.647 166625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.648 166625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.648 166625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.648 166625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.648 166625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.649 166625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.649 166625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.649 166625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.649 166625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.649 166625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.650 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.650 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.650 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.650 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.650 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.651 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.651 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.651 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.651 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.651 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.652 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.652 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.652 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.652 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.652 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.652 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.652 166625 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.653 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.653 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.653 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.653 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.653 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.653 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.653 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.653 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.653 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.654 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.654 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.654 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.654 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.654 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.654 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.654 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.654 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.654 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.655 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.655 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.655 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.655 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.655 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.655 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.655 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.655 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.656 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.656 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.656 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.656 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.656 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.656 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.656 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.656 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.656 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.657 166625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.657 166625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.657 166625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.657 166625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.657 166625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.657 166625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.657 166625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.657 166625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.658 166625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.658 166625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.658 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.658 166625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.658 166625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.658 166625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.658 166625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.658 166625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.658 166625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.659 166625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.659 166625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.659 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.659 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.659 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.659 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.659 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.659 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.659 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.659 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.660 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.660 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.660 166625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.660 166625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.660 166625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.660 166625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.660 166625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.660 166625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.660 166625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.661 166625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.661 166625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.661 166625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.661 166625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.661 166625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.661 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.661 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.661 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.661 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.662 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.662 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.662 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.662 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.662 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.662 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.662 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.662 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.662 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.663 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.663 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.663 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.663 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.663 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.663 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.663 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.663 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.664 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.664 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.664 166625 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.664 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.664 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.664 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.664 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.664 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.665 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.665 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.665 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.665 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.665 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.665 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.665 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.665 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.665 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.666 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.666 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.666 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.666 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.666 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.666 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.666 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.666 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.666 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.667 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.667 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.667 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.667 166625 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.667 166625 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.667 166625 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.667 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.667 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.667 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.667 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.668 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.668 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.668 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.668 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.668 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.668 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.668 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.668 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.668 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.669 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.669 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.669 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.669 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.669 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.669 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.669 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.669 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.669 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.670 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.670 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.670 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.670 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.670 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.670 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.670 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.670 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.670 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.671 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.671 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.671 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.671 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.671 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.671 166625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.671 166625 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.679 166625 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.680 166625 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.680 166625 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.680 166625 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.680 166625 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.694 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name f90cdfa2-81a1-408b-861e-9121944637ea (UUID: f90cdfa2-81a1-408b-861e-9121944637ea) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.724 166625 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.724 166625 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.724 166625 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.724 166625 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.727 166625 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.734 166625 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.742 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'f90cdfa2-81a1-408b-861e-9121944637ea'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], external_ids={}, name=f90cdfa2-81a1-408b-861e-9121944637ea, nb_cfg_timestamp=1769421103510, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.743 166625 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fb847c39bb0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.743 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.744 166625 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.744 166625 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.744 166625 INFO oslo_service.service [-] Starting 1 workers
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.748 166625 DEBUG oslo_service.service [-] Started child 166908 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.751 166625 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp3jrr8h39/privsep.sock']
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.752 166908 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-170643'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 26 09:52:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.781 166908 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.781 166908 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.781 166908 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.786 166908 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.792 166908 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 26 09:52:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:54.799 166908 INFO eventlet.wsgi.server [-] (166908) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 26 09:52:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:54.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:54.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:55 compute-0 sudo[167015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcmraquektsoeofgndppmerayjdxgkpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421174.7531822-1422-68095363460813/AnsiballZ_stat.py'
Jan 26 09:52:55 compute-0 sudo[167015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:55 compute-0 python3.9[167017]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:52:55 compute-0 sudo[167015]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:55 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 26 09:52:55 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:55.419 166625 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 26 09:52:55 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:55.420 166625 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3jrr8h39/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 26 09:52:55 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:55.292 167020 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 09:52:55 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:55.296 167020 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 09:52:55 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:55.299 167020 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 26 09:52:55 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:55.299 167020 INFO oslo.privsep.daemon [-] privsep daemon running as pid 167020
Jan 26 09:52:55 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:55.424 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[e5240fe3-3d7c-44d5-b3ac-88db9497c38f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 09:52:55 compute-0 sudo[167145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tidddfqgylvbggsppdjewazftzdriapg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421174.7531822-1422-68095363460813/AnsiballZ_copy.py'
Jan 26 09:52:55 compute-0 sudo[167145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:52:55 compute-0 python3.9[167147]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421174.7531822-1422-68095363460813/.source.yaml _original_basename=.fjb2orvy follow=False checksum=e4cba382ee426a679e5ef46b4fc246a694e7130c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:52:55 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:55.930 167020 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 09:52:55 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:55.931 167020 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 09:52:55 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:55.931 167020 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 09:52:55 compute-0 sudo[167145]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:56 compute-0 ceph-mon[74456]: pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:52:56 compute-0 sshd-session[157285]: Connection closed by 192.168.122.30 port 59692
Jan 26 09:52:56 compute-0 sshd-session[157282]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:52:56 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Jan 26 09:52:56 compute-0 systemd[1]: session-52.scope: Consumed 57.156s CPU time.
Jan 26 09:52:56 compute-0 systemd-logind[787]: Session 52 logged out. Waiting for processes to exit.
Jan 26 09:52:56 compute-0 systemd-logind[787]: Removed session 52.
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.464 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[5909ef3d-94d3-477f-b718-3ac4f3a9e686]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.466 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, column=external_ids, values=({'neutron:ovn-metadata-id': 'b7d18c97-1cb4-5853-be43-0272f4edfcbf'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.475 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.487 166625 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.487 166625 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.487 166625 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.487 166625 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.487 166625 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.488 166625 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.488 166625 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.488 166625 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.488 166625 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.489 166625 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.489 166625 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.489 166625 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.490 166625 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.490 166625 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.490 166625 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.491 166625 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.491 166625 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.491 166625 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.491 166625 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.491 166625 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.492 166625 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.492 166625 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.492 166625 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.493 166625 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.493 166625 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.493 166625 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.494 166625 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.494 166625 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.494 166625 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.494 166625 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.495 166625 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.495 166625 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.495 166625 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.495 166625 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.496 166625 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.496 166625 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.496 166625 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.497 166625 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.497 166625 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.497 166625 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.497 166625 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.498 166625 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.498 166625 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.498 166625 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.498 166625 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.498 166625 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.499 166625 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.499 166625 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.499 166625 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.499 166625 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.500 166625 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.500 166625 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.500 166625 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.500 166625 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.500 166625 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.501 166625 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.501 166625 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.501 166625 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.501 166625 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.502 166625 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.502 166625 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.502 166625 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.503 166625 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.503 166625 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.503 166625 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.503 166625 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.504 166625 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.504 166625 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.504 166625 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.504 166625 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.505 166625 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.505 166625 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.505 166625 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.506 166625 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.506 166625 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.506 166625 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.506 166625 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.507 166625 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.507 166625 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.507 166625 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.507 166625 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.507 166625 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.508 166625 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.508 166625 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.508 166625 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.508 166625 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.509 166625 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.509 166625 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.509 166625 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.509 166625 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.510 166625 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.510 166625 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.510 166625 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.511 166625 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.511 166625 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.511 166625 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.512 166625 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.512 166625 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.512 166625 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.512 166625 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.513 166625 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.513 166625 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.513 166625 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.513 166625 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.513 166625 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.514 166625 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.514 166625 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.514 166625 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.514 166625 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.515 166625 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.515 166625 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.515 166625 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.515 166625 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.516 166625 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.516 166625 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.516 166625 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.517 166625 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.517 166625 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.517 166625 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.518 166625 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.518 166625 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.518 166625 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.519 166625 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.519 166625 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.519 166625 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.519 166625 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.520 166625 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.520 166625 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.520 166625 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.520 166625 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.521 166625 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.521 166625 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.521 166625 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.522 166625 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.522 166625 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.522 166625 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.522 166625 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.522 166625 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.523 166625 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.523 166625 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.523 166625 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.523 166625 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.524 166625 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.524 166625 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.524 166625 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.524 166625 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.524 166625 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.524 166625 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.524 166625 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.525 166625 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.525 166625 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.525 166625 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.525 166625 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.525 166625 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.526 166625 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.526 166625 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.526 166625 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.526 166625 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.526 166625 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.526 166625 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.527 166625 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.527 166625 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.527 166625 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.527 166625 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.527 166625 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.527 166625 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.527 166625 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.528 166625 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.528 166625 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.528 166625 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.528 166625 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.528 166625 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.528 166625 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.529 166625 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.529 166625 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.529 166625 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.529 166625 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.529 166625 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.529 166625 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.530 166625 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.530 166625 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.530 166625 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.530 166625 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.530 166625 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.530 166625 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.531 166625 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.531 166625 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.531 166625 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.531 166625 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.531 166625 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.531 166625 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.531 166625 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.532 166625 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.532 166625 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.532 166625 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.532 166625 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.532 166625 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.532 166625 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.533 166625 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.533 166625 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.533 166625 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.533 166625 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.533 166625 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.533 166625 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.533 166625 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.534 166625 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.534 166625 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.534 166625 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.534 166625 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.534 166625 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.534 166625 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.534 166625 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.535 166625 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.535 166625 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.535 166625 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.535 166625 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.535 166625 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.535 166625 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.536 166625 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.536 166625 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.536 166625 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.536 166625 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.536 166625 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.536 166625 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.536 166625 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.537 166625 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.537 166625 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.537 166625 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.537 166625 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.537 166625 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.537 166625 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.537 166625 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.538 166625 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.538 166625 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.538 166625 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.538 166625 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.538 166625 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.539 166625 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.539 166625 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.539 166625 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.539 166625 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.539 166625 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.539 166625 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.539 166625 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.540 166625 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.540 166625 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.540 166625 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.540 166625 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.540 166625 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.540 166625 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.540 166625 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.541 166625 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.541 166625 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.541 166625 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.541 166625 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.541 166625 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.541 166625 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.542 166625 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.542 166625 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.542 166625 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.542 166625 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.542 166625 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.542 166625 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.543 166625 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.543 166625 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.543 166625 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.543 166625 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.543 166625 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.543 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.543 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.544 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.544 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.544 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.544 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.544 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.544 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.545 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.545 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.545 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.545 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.545 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.545 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.546 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.546 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.546 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.546 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.546 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.546 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.546 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.547 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.547 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.547 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.547 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.547 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.547 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.548 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.548 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.548 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.548 166625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.548 166625 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.549 166625 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.549 166625 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.549 166625 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 09:52:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:52:56.549 166625 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 26 09:52:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:52:56] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 26 09:52:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:52:56] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 26 09:52:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:52:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:56.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:52:57.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:52:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:57.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:57 compute-0 sudo[167174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:52:57 compute-0 sudo[167174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:52:57 compute-0 sudo[167174]: pam_unix(sudo:session): session closed for user root
Jan 26 09:52:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:52:58 compute-0 ceph-mon[74456]: pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e54000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c000da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:52:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c000da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:52:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:52:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:52:58.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:52:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:52:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:52:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:52:59.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:00 compute-0 ceph-mon[74456]: pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:53:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095300 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:53:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:00 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:00 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:00 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e28000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1023 B/s wr, 63 op/s
Jan 26 09:53:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:00.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.002000053s ======
Jan 26 09:53:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:01.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 26 09:53:01 compute-0 sshd-session[167219]: Accepted publickey for zuul from 192.168.122.30 port 44368 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:53:01 compute-0 systemd-logind[787]: New session 53 of user zuul.
Jan 26 09:53:01 compute-0 systemd[1]: Started Session 53 of User zuul.
Jan 26 09:53:01 compute-0 sshd-session[167219]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:53:02 compute-0 python3.9[167372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:53:02 compute-0 ceph-mon[74456]: pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1023 B/s wr, 63 op/s
Jan 26 09:53:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:02 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:02 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:02 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 426 B/s wr, 61 op/s
Jan 26 09:53:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:53:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:02.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:53:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:03.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:53:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:53:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:53:04 compute-0 sudo[167528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mawyqblqgssbbarpgajdrcwetqfrwdsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421183.6528485-57-124523240279559/AnsiballZ_command.py'
Jan 26 09:53:04 compute-0 sudo[167528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:04 compute-0 python3.9[167530]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:53:04 compute-0 ceph-mon[74456]: pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 426 B/s wr, 61 op/s
Jan 26 09:53:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:53:04 compute-0 sudo[167528]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:04 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e28001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:04 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:04 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 426 B/s wr, 61 op/s
Jan 26 09:53:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:04.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:05.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:05 compute-0 sudo[167695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwuunzyyumjvqijxwpxidjksbtmavnji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421184.762555-90-212015347206815/AnsiballZ_systemd_service.py'
Jan 26 09:53:05 compute-0 sudo[167695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:05 compute-0 python3.9[167697]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 09:53:05 compute-0 systemd[1]: Reloading.
Jan 26 09:53:05 compute-0 systemd-rc-local-generator[167738]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:53:05 compute-0 systemd-sysv-generator[167743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:53:05 compute-0 podman[167699]: 2026-01-26 09:53:05.864946404 +0000 UTC m=+0.110270577 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 09:53:06 compute-0 sudo[167695]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:06 compute-0 ceph-mon[74456]: pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 426 B/s wr, 61 op/s
Jan 26 09:53:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:06 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:53:06] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Jan 26 09:53:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:53:06] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Jan 26 09:53:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:06 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:06 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e28001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Jan 26 09:53:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:06.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:53:07.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:53:07 compute-0 python3.9[167909]: ansible-ansible.builtin.service_facts Invoked
Jan 26 09:53:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:07.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:07 compute-0 network[167926]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 09:53:07 compute-0 network[167927]: 'network-scripts' will be removed from distribution in near future.
Jan 26 09:53:07 compute-0 network[167928]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 09:53:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:53:08 compute-0 ceph-mon[74456]: pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Jan 26 09:53:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:08 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:08 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:08 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Jan 26 09:53:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:08.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:09.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:10 compute-0 ceph-mon[74456]: pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Jan 26 09:53:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:10 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e28001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:10 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:10 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Jan 26 09:53:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:10.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:11.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:12 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:12 compute-0 ceph-mon[74456]: pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 85 B/s wr, 60 op/s
Jan 26 09:53:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:12 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e28002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:12 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:53:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:12.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:53:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:13.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:53:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:14 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30002470 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:14 compute-0 ceph-mon[74456]: pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:14 compute-0 sudo[168196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljlqyzvireqqpdrlwyisxqyhjfsuongz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421194.2357523-147-177676588189242/AnsiballZ_systemd_service.py'
Jan 26 09:53:14 compute-0 sudo[168196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:14 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:14 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e28002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:14 compute-0 python3.9[168198]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:53:14 compute-0 sudo[168196]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:14.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:53:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:15.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:53:15 compute-0 sudo[168349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vebmejkexamlklpfenglcocklowbzgyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421195.0413017-147-138632949749692/AnsiballZ_systemd_service.py'
Jan 26 09:53:15 compute-0 sudo[168349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:15 compute-0 ceph-mon[74456]: pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:15 compute-0 python3.9[168351]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:53:15 compute-0 sudo[168349]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:16 compute-0 sudo[168502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmdlxsrinoogiglzwlmloausjqxbwzyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421195.9358678-147-255844360505216/AnsiballZ_systemd_service.py'
Jan 26 09:53:16 compute-0 sudo[168502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:16 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:16 compute-0 python3.9[168504]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:53:16 compute-0 sudo[168502]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:53:16] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Jan 26 09:53:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:53:16] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Jan 26 09:53:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:16 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:16 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:16.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:53:17.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:53:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:53:17.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:53:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:17.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:17 compute-0 sudo[168657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eglrcouqfrqjhwafesmksjtmkvehqkhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421196.804357-147-208486723591365/AnsiballZ_systemd_service.py'
Jan 26 09:53:17 compute-0 sudo[168657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:17 compute-0 sudo[168660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:53:17 compute-0 sudo[168660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:53:17 compute-0 sudo[168660]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:17 compute-0 python3.9[168659]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:53:17 compute-0 sudo[168657]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:53:17 compute-0 sshd-session[168683]: Invalid user test from 157.245.76.178 port 38682
Jan 26 09:53:18 compute-0 sshd-session[168683]: Connection closed by invalid user test 157.245.76.178 port 38682 [preauth]
Jan 26 09:53:18 compute-0 sudo[168837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqayqylckrzgdhpzejgiscofyzyhhuho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421197.7122276-147-120231136603342/AnsiballZ_systemd_service.py'
Jan 26 09:53:18 compute-0 sudo[168837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:18 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e28002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:18 compute-0 ceph-mon[74456]: pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:18 compute-0 python3.9[168839]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:53:18 compute-0 sudo[168837]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:53:18
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'vms', 'default.rgw.meta', '.nfs', 'volumes', '.rgw.root', 'default.rgw.control']
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:53:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:53:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:53:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:18 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:18 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:53:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:53:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:53:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:18.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:53:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:19.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:19 compute-0 sudo[168992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlodpatfocioitqswuzrafbiyvsxfdxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421198.6754372-147-13224377520489/AnsiballZ_systemd_service.py'
Jan 26 09:53:19 compute-0 sudo[168992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:19 compute-0 python3.9[168994]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:53:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:53:19 compute-0 sudo[168992]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:20 compute-0 sudo[169145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iraeokujshrjrfybpwvvndulezahpary ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421199.7101183-147-115354901013065/AnsiballZ_systemd_service.py'
Jan 26 09:53:20 compute-0 sudo[169145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:20 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:20 compute-0 python3.9[169147]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:53:20 compute-0 ceph-mon[74456]: pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:20 compute-0 sudo[169145]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:20 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e28003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:20 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:53:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:20.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:53:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:21.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:53:21 compute-0 ceph-mon[74456]: pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:53:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:22 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:22 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:22 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e28003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:53:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:22.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:23.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:23 compute-0 podman[169177]: 2026-01-26 09:53:23.147629434 +0000 UTC m=+0.075914215 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 09:53:23 compute-0 ceph-mon[74456]: pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:23 compute-0 sudo[169322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugzkpezlmfgckizddeidkzrpnngiddqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421203.3946161-303-5126781421967/AnsiballZ_file.py'
Jan 26 09:53:23 compute-0 sudo[169322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:24 compute-0 python3.9[169324]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:24 compute-0 sudo[169322]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:24 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:24 compute-0 sudo[169476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysvemcjscwkyvvcjtgrlxtytqpgxlahv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421204.3453937-303-245125838787640/AnsiballZ_file.py'
Jan 26 09:53:24 compute-0 sudo[169476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:24 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:24 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:24 compute-0 python3.9[169478]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:24 compute-0 sudo[169476]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:53:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:24.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:53:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:53:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:25.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:53:25 compute-0 sudo[169628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeazpkbzkufqcyaiwhfiaskrqtxnqwvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421205.058011-303-91542819751/AnsiballZ_file.py'
Jan 26 09:53:25 compute-0 sudo[169628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:25 compute-0 python3.9[169630]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:25 compute-0 sudo[169628]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:25 compute-0 ceph-mon[74456]: pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:26 compute-0 sudo[169780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koyhxjnmvsprhieylpedjzyeadkpjcnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421205.8584204-303-131540779173278/AnsiballZ_file.py'
Jan 26 09:53:26 compute-0 sudo[169780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:26 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e28003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:26 compute-0 python3.9[169782]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:26 compute-0 sudo[169780]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:53:26] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Jan 26 09:53:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:53:26] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Jan 26 09:53:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:26 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:26 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:26.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:53:27.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:53:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:53:27.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:53:27 compute-0 sudo[169934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emcbqqdshfmlpwtaxzgevemakhtmbpui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421206.6589293-303-80543201630695/AnsiballZ_file.py'
Jan 26 09:53:27 compute-0 sudo[169934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:53:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:27.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:53:27 compute-0 python3.9[169936]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:27 compute-0 sudo[169934]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:53:27 compute-0 sudo[170086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kswwgzzcyrzqqbdexoqoxyeqdhrjwahc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421207.4508715-303-277366317040920/AnsiballZ_file.py'
Jan 26 09:53:27 compute-0 sudo[170086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:27 compute-0 ceph-mon[74456]: pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:28 compute-0 python3.9[170088]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:28 compute-0 sudo[170086]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:28 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:28 compute-0 sudo[170240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvixestxnyeozmgnrqyfszjzmxhvtcqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421208.259482-303-96914492735516/AnsiballZ_file.py'
Jan 26 09:53:28 compute-0 sudo[170240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:28 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e28003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:28 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:28 compute-0 python3.9[170242]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:28 compute-0 sudo[170240]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:53:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:28.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:53:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:53:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:29.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:53:29 compute-0 sudo[170392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcuuupfgtjrsdmxadhuhmjmxjraqahgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421209.020483-453-165335561805104/AnsiballZ_file.py'
Jan 26 09:53:29 compute-0 sudo[170392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:29 compute-0 python3.9[170394]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:29 compute-0 sudo[170392]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:29 compute-0 ceph-mon[74456]: pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:30 compute-0 sudo[170544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzxxmnnjndflwpuuymwxclkkhfdcrdme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421209.8823605-453-130691676916923/AnsiballZ_file.py'
Jan 26 09:53:30 compute-0 sudo[170544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:30 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c001ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:30 compute-0 python3.9[170547]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:30 compute-0 sudo[170544]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:30 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:30 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e54002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:53:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:30.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:31.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:31 compute-0 sudo[170701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezdxdbufnsvfosbidxokkputckjqccor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421210.7053854-453-178626575974340/AnsiballZ_file.py'
Jan 26 09:53:31 compute-0 sudo[170701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:31 compute-0 python3.9[170703]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:31 compute-0 sudo[170701]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:31 compute-0 sudo[170853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msjlnuwsrkmwhnsevzxkusbwbksnrgtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421211.5057678-453-251253529573515/AnsiballZ_file.py'
Jan 26 09:53:31 compute-0 sudo[170853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:31 compute-0 ceph-mon[74456]: pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:53:32 compute-0 python3.9[170855]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:32 compute-0 sudo[170853]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:32 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:32 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c0037a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:32 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:32 compute-0 sudo[171007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjdixduyqczjfyxuakrppncwcmhvfifl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421212.3841686-453-3217170333089/AnsiballZ_file.py'
Jan 26 09:53:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:32 compute-0 sudo[171007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:53:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:32.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:33 compute-0 python3.9[171009]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:33 compute-0 sudo[171007]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:33.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:33 compute-0 sudo[171159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkxyprrosgznwxktisllrguaolbyibaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421213.2443743-453-248624475955234/AnsiballZ_file.py'
Jan 26 09:53:33 compute-0 sudo[171159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:53:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:53:33 compute-0 python3.9[171161]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:33 compute-0 sudo[171159]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:33 compute-0 ceph-mon[74456]: pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:53:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:34 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:34 compute-0 sudo[171313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnintuctsksjmpkhfyvnzbzmdcpvzwwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421214.0805411-453-213300176452390/AnsiballZ_file.py'
Jan 26 09:53:34 compute-0 sudo[171313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:34 compute-0 python3.9[171315]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:53:34 compute-0 sudo[171313]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:34 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:34 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c0037a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:34.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:53:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:35.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:53:35 compute-0 sudo[171465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raaloszrsxnlurnjbjlfxqfqsxbvmjkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421215.064731-606-147219015189123/AnsiballZ_command.py'
Jan 26 09:53:35 compute-0 sudo[171465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:35 compute-0 python3.9[171467]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:53:35 compute-0 sudo[171465]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:36 compute-0 ceph-mon[74456]: pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:36 compute-0 podman[171546]: 2026-01-26 09:53:36.186048873 +0000 UTC m=+0.112529350 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible)
Jan 26 09:53:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:36 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e540011e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:53:36] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:53:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:53:36] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:53:36 compute-0 python3.9[171648]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 09:53:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:36 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:36 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:36.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:53:37.007Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:53:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:53:37.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:53:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:37.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:37 compute-0 sudo[171748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:53:37 compute-0 sudo[171748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:53:37 compute-0 sudo[171748]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:37 compute-0 sudo[171823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dybavbkbdeajmgnnvgprpekmsrrdatqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421217.1400874-660-33442045657784/AnsiballZ_systemd_service.py'
Jan 26 09:53:37 compute-0 sudo[171823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:53:37 compute-0 python3.9[171825]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 09:53:37 compute-0 systemd[1]: Reloading.
Jan 26 09:53:37 compute-0 systemd-rc-local-generator[171851]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:53:37 compute-0 systemd-sysv-generator[171857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:53:38 compute-0 ceph-mon[74456]: pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:38 compute-0 sudo[171823]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:38 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c0037a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:38 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e540011e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:38 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:38 compute-0 sudo[172012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydqrbdkzozhjavpobcvwtpkabhkwmfuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421218.5140762-684-265279124512438/AnsiballZ_command.py'
Jan 26 09:53:38 compute-0 sudo[172012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:53:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:38.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:53:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:39.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:39 compute-0 python3.9[172014]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:53:39 compute-0 sudo[172012]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:39 compute-0 sudo[172165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmglzqzkzjebdtytyyyxaeyulzbtrtue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421219.3617332-684-61735840043228/AnsiballZ_command.py'
Jan 26 09:53:39 compute-0 sudo[172165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:39 compute-0 python3.9[172167]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:53:40 compute-0 sudo[172165]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:40 compute-0 ceph-mon[74456]: pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:40 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:40 compute-0 sudo[172320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hflbaujmknlfjwqeqqsmpghwvgmgenmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421220.2025485-684-129225033328000/AnsiballZ_command.py'
Jan 26 09:53:40 compute-0 sudo[172320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:40 compute-0 python3.9[172322]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:53:40 compute-0 sudo[172320]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:40 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c0037a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:40 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e540011e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:53:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:40.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:41.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:41 compute-0 sudo[172473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhpukdxdtsmtruvhzszlmsilbjzrxfuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421220.8578835-684-28341692752984/AnsiballZ_command.py'
Jan 26 09:53:41 compute-0 sudo[172473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:41 compute-0 python3.9[172475]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:53:41 compute-0 sudo[172473]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:41 compute-0 sudo[172626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdeggvfbkkwyqjwkuccyhwgrrylvaboh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421221.5716908-684-235756492140554/AnsiballZ_command.py'
Jan 26 09:53:41 compute-0 sudo[172626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:42 compute-0 python3.9[172628]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:53:42 compute-0 sudo[172626]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:42 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:42 compute-0 ceph-mon[74456]: pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:53:42 compute-0 sudo[172781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhhlruvwvpsguorcxvfrjbhsacdjgxra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421222.3316631-684-257286060405493/AnsiballZ_command.py'
Jan 26 09:53:42 compute-0 sudo[172781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:42 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:42 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e540011e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:53:42 compute-0 python3.9[172783]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:53:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:42.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:42 compute-0 sudo[172781]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:43.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:43 compute-0 sudo[172934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvinhjydxgolewpcqwdemazhwedlaaaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421223.141856-684-95697561096775/AnsiballZ_command.py'
Jan 26 09:53:43 compute-0 sudo[172934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:43 compute-0 python3.9[172936]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:53:43 compute-0 ceph-mon[74456]: pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:43 compute-0 sudo[172934]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:44 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c0037a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:44 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:44 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:53:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:44.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:53:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:45 compute-0 ceph-mon[74456]: pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:46 compute-0 sudo[173089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzkrmztohxpnjdbiovpljjecplcwynyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421225.6120055-846-254782309200946/AnsiballZ_getent.py'
Jan 26 09:53:46 compute-0 sudo[173089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:46 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e54009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:46 compute-0 python3.9[173091]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 26 09:53:46 compute-0 sudo[173089]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:53:46] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:53:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:53:46] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:53:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:46 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c0037a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:46 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:53:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:46.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:53:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:53:47.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:53:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:47 compute-0 sudo[173244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wannsafhwuctqjwjbcfdmdwlqvozwymn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421226.6773636-870-199257814076721/AnsiballZ_group.py'
Jan 26 09:53:47 compute-0 sudo[173244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:47 compute-0 python3.9[173246]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 09:53:47 compute-0 groupadd[173247]: group added to /etc/group: name=libvirt, GID=42473
Jan 26 09:53:47 compute-0 groupadd[173247]: group added to /etc/gshadow: name=libvirt
Jan 26 09:53:47 compute-0 groupadd[173247]: new group: name=libvirt, GID=42473
Jan 26 09:53:47 compute-0 sudo[173244]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:53:47 compute-0 ceph-mon[74456]: pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:48 compute-0 sudo[173402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iilgcmwirmydvkrnjkxuqeucmaxjgonw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421227.7490258-894-42368964175193/AnsiballZ_user.py'
Jan 26 09:53:48 compute-0 sudo[173402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:48 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:48 compute-0 python3.9[173405]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 09:53:48 compute-0 useradd[173408]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 26 09:53:48 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 09:53:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:53:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:53:48 compute-0 sudo[173402]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:48 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e54009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:53:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:53:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:53:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:53:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:48 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c0037a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:53:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:53:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:53:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:48.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:49.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:49 compute-0 sudo[173565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezusopvidojkpqpuyrzjzlmvueecqoww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421229.15661-927-20919846327565/AnsiballZ_setup.py'
Jan 26 09:53:49 compute-0 sudo[173565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:49 compute-0 python3.9[173567]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:53:49 compute-0 ceph-mon[74456]: pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:50 compute-0 sudo[173565]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:50 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:50 compute-0 sudo[173651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdhurxecyvcbvhojrciplxrajxxxgife ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421229.15661-927-20919846327565/AnsiballZ_dnf.py'
Jan 26 09:53:50 compute-0 sudo[173651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:53:50 compute-0 python3.9[173653]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:53:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:50 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:50 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:53:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:53:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:50.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:53:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:53:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:51.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:53:51 compute-0 ceph-mon[74456]: pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:53:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:52 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e54009990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:52 compute-0 sudo[173661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:53:52 compute-0 sudo[173661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:53:52 compute-0 sudo[173661]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:52 compute-0 sudo[173686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:53:52 compute-0 sudo[173686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:53:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:52 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e40000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:52 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e54009990 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:53:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:52.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:53:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:53.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:53:53 compute-0 sudo[173686]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:53:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:53:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:53:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:53:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:53:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:53:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:53:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:53:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:53:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:53:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:53:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:53:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:53:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:53:53 compute-0 sudo[173750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:53:53 compute-0 sudo[173750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:53:53 compute-0 sudo[173750]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:53 compute-0 sudo[173776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:53:53 compute-0 sudo[173776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:53:53 compute-0 podman[173774]: 2026-01-26 09:53:53.647745368 +0000 UTC m=+0.079866160 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Jan 26 09:53:54 compute-0 ceph-mon[74456]: pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:54 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:53:54 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:53:54 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:53:54 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:53:54 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:53:54 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:53:54 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:53:54 compute-0 podman[173860]: 2026-01-26 09:53:54.119840365 +0000 UTC m=+0.071267823 container create 4d5172ed1bf0ab6c7fb9b21cbb873b1095afa65ae4c926f9e9ac5b15c63195d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_booth, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:53:54 compute-0 systemd[1]: Started libpod-conmon-4d5172ed1bf0ab6c7fb9b21cbb873b1095afa65ae4c926f9e9ac5b15c63195d4.scope.
Jan 26 09:53:54 compute-0 podman[173860]: 2026-01-26 09:53:54.093403536 +0000 UTC m=+0.044831014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:53:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:53:54 compute-0 podman[173860]: 2026-01-26 09:53:54.308412634 +0000 UTC m=+0.259840102 container init 4d5172ed1bf0ab6c7fb9b21cbb873b1095afa65ae4c926f9e9ac5b15c63195d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 26 09:53:54 compute-0 podman[173860]: 2026-01-26 09:53:54.333333143 +0000 UTC m=+0.284760601 container start 4d5172ed1bf0ab6c7fb9b21cbb873b1095afa65ae4c926f9e9ac5b15c63195d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_booth, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 26 09:53:54 compute-0 podman[173860]: 2026-01-26 09:53:54.338430507 +0000 UTC m=+0.289857975 container attach 4d5172ed1bf0ab6c7fb9b21cbb873b1095afa65ae4c926f9e9ac5b15c63195d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_booth, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 09:53:54 compute-0 modest_booth[173876]: 167 167
Jan 26 09:53:54 compute-0 systemd[1]: libpod-4d5172ed1bf0ab6c7fb9b21cbb873b1095afa65ae4c926f9e9ac5b15c63195d4.scope: Deactivated successfully.
Jan 26 09:53:54 compute-0 podman[173860]: 2026-01-26 09:53:54.343792738 +0000 UTC m=+0.295220196 container died 4d5172ed1bf0ab6c7fb9b21cbb873b1095afa65ae4c926f9e9ac5b15c63195d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_booth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 26 09:53:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0165af4f9df18e511f7069543dd84eb66729c1512586525b4e906257815925e0-merged.mount: Deactivated successfully.
Jan 26 09:53:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:54 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c0037a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:54 compute-0 podman[173860]: 2026-01-26 09:53:54.43964475 +0000 UTC m=+0.391072218 container remove 4d5172ed1bf0ab6c7fb9b21cbb873b1095afa65ae4c926f9e9ac5b15c63195d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_booth, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:53:54 compute-0 systemd[1]: libpod-conmon-4d5172ed1bf0ab6c7fb9b21cbb873b1095afa65ae4c926f9e9ac5b15c63195d4.scope: Deactivated successfully.
Jan 26 09:53:54 compute-0 podman[173904]: 2026-01-26 09:53:54.625774235 +0000 UTC m=+0.055628240 container create b41f9287fefd2a97aaf94b377ecba183f3180ce98d18f93dd9a6a9829ea6ade2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 09:53:54 compute-0 systemd[1]: Started libpod-conmon-b41f9287fefd2a97aaf94b377ecba183f3180ce98d18f93dd9a6a9829ea6ade2.scope.
Jan 26 09:53:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:53:54.674 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 09:53:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:53:54.674 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 09:53:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:53:54.674 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 09:53:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6033e8bc75a78684f02d303bbe766129d8e6ef4a900be2dd8999143b9e811096/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:53:54 compute-0 podman[173904]: 2026-01-26 09:53:54.601906115 +0000 UTC m=+0.031760140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6033e8bc75a78684f02d303bbe766129d8e6ef4a900be2dd8999143b9e811096/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6033e8bc75a78684f02d303bbe766129d8e6ef4a900be2dd8999143b9e811096/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6033e8bc75a78684f02d303bbe766129d8e6ef4a900be2dd8999143b9e811096/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:53:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6033e8bc75a78684f02d303bbe766129d8e6ef4a900be2dd8999143b9e811096/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:53:54 compute-0 podman[173904]: 2026-01-26 09:53:54.711434517 +0000 UTC m=+0.141288542 container init b41f9287fefd2a97aaf94b377ecba183f3180ce98d18f93dd9a6a9829ea6ade2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 26 09:53:54 compute-0 podman[173904]: 2026-01-26 09:53:54.724753098 +0000 UTC m=+0.154607123 container start b41f9287fefd2a97aaf94b377ecba183f3180ce98d18f93dd9a6a9829ea6ade2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lumiere, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 09:53:54 compute-0 podman[173904]: 2026-01-26 09:53:54.728639531 +0000 UTC m=+0.158493546 container attach b41f9287fefd2a97aaf94b377ecba183f3180ce98d18f93dd9a6a9829ea6ade2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lumiere, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 09:53:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:54 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e40000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:54 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e30004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:54.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:55 compute-0 wizardly_lumiere[173920]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:53:55 compute-0 wizardly_lumiere[173920]: --> All data devices are unavailable
Jan 26 09:53:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:53:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:53:55 compute-0 systemd[1]: libpod-b41f9287fefd2a97aaf94b377ecba183f3180ce98d18f93dd9a6a9829ea6ade2.scope: Deactivated successfully.
Jan 26 09:53:55 compute-0 podman[173935]: 2026-01-26 09:53:55.163423363 +0000 UTC m=+0.031853393 container died b41f9287fefd2a97aaf94b377ecba183f3180ce98d18f93dd9a6a9829ea6ade2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lumiere, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 26 09:53:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6033e8bc75a78684f02d303bbe766129d8e6ef4a900be2dd8999143b9e811096-merged.mount: Deactivated successfully.
Jan 26 09:53:55 compute-0 podman[173935]: 2026-01-26 09:53:55.225538363 +0000 UTC m=+0.093968363 container remove b41f9287fefd2a97aaf94b377ecba183f3180ce98d18f93dd9a6a9829ea6ade2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:53:55 compute-0 systemd[1]: libpod-conmon-b41f9287fefd2a97aaf94b377ecba183f3180ce98d18f93dd9a6a9829ea6ade2.scope: Deactivated successfully.
Jan 26 09:53:55 compute-0 sudo[173776]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:55 compute-0 sudo[173951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:53:55 compute-0 sudo[173951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:53:55 compute-0 sudo[173951]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:55 compute-0 sudo[173976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:53:55 compute-0 sudo[173976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:53:55 compute-0 podman[174044]: 2026-01-26 09:53:55.935368617 +0000 UTC m=+0.068368387 container create 2b730d736c8a722f7579b9a1467dad573ce18711303e0e8226a19d8886fb0f3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_turing, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:53:55 compute-0 systemd[1]: Started libpod-conmon-2b730d736c8a722f7579b9a1467dad573ce18711303e0e8226a19d8886fb0f3f.scope.
Jan 26 09:53:55 compute-0 podman[174044]: 2026-01-26 09:53:55.890994275 +0000 UTC m=+0.023994065 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:53:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:53:56 compute-0 podman[174044]: 2026-01-26 09:53:56.026516284 +0000 UTC m=+0.159516064 container init 2b730d736c8a722f7579b9a1467dad573ce18711303e0e8226a19d8886fb0f3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 26 09:53:56 compute-0 podman[174044]: 2026-01-26 09:53:56.038028567 +0000 UTC m=+0.171028327 container start 2b730d736c8a722f7579b9a1467dad573ce18711303e0e8226a19d8886fb0f3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_turing, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 26 09:53:56 compute-0 podman[174044]: 2026-01-26 09:53:56.041670464 +0000 UTC m=+0.174670234 container attach 2b730d736c8a722f7579b9a1467dad573ce18711303e0e8226a19d8886fb0f3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_turing, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 26 09:53:56 compute-0 intelligent_turing[174060]: 167 167
Jan 26 09:53:56 compute-0 systemd[1]: libpod-2b730d736c8a722f7579b9a1467dad573ce18711303e0e8226a19d8886fb0f3f.scope: Deactivated successfully.
Jan 26 09:53:56 compute-0 podman[174044]: 2026-01-26 09:53:56.048086813 +0000 UTC m=+0.181086573 container died 2b730d736c8a722f7579b9a1467dad573ce18711303e0e8226a19d8886fb0f3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 09:53:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9fcaa34ae6721cb6f86fee2a7604ea9f0590c382c9fd30530c29d88ec60f8fc-merged.mount: Deactivated successfully.
Jan 26 09:53:56 compute-0 podman[174044]: 2026-01-26 09:53:56.096515842 +0000 UTC m=+0.229515602 container remove 2b730d736c8a722f7579b9a1467dad573ce18711303e0e8226a19d8886fb0f3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:53:56 compute-0 ceph-mon[74456]: pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:56 compute-0 systemd[1]: libpod-conmon-2b730d736c8a722f7579b9a1467dad573ce18711303e0e8226a19d8886fb0f3f.scope: Deactivated successfully.
Jan 26 09:53:56 compute-0 podman[174083]: 2026-01-26 09:53:56.269802628 +0000 UTC m=+0.055530858 container create 6d1b5f792a83c6f1f91bf518311dc9d0db5340ce253d9f7bc48075c3497c36bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 09:53:56 compute-0 systemd[1]: Started libpod-conmon-6d1b5f792a83c6f1f91bf518311dc9d0db5340ce253d9f7bc48075c3497c36bc.scope.
Jan 26 09:53:56 compute-0 podman[174083]: 2026-01-26 09:53:56.239062566 +0000 UTC m=+0.024790816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:53:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc35b2303e56ec0beafbfcb2c2d3ccebd8aec12a3c9c6a2403937e24bd46440/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc35b2303e56ec0beafbfcb2c2d3ccebd8aec12a3c9c6a2403937e24bd46440/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc35b2303e56ec0beafbfcb2c2d3ccebd8aec12a3c9c6a2403937e24bd46440/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc35b2303e56ec0beafbfcb2c2d3ccebd8aec12a3c9c6a2403937e24bd46440/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:53:56 compute-0 podman[174083]: 2026-01-26 09:53:56.377099901 +0000 UTC m=+0.162828151 container init 6d1b5f792a83c6f1f91bf518311dc9d0db5340ce253d9f7bc48075c3497c36bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_neumann, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 09:53:56 compute-0 podman[174083]: 2026-01-26 09:53:56.387827554 +0000 UTC m=+0.173555784 container start 6d1b5f792a83c6f1f91bf518311dc9d0db5340ce253d9f7bc48075c3497c36bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_neumann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 26 09:53:56 compute-0 podman[174083]: 2026-01-26 09:53:56.39068397 +0000 UTC m=+0.176412200 container attach 6d1b5f792a83c6f1f91bf518311dc9d0db5340ce253d9f7bc48075c3497c36bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_neumann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 26 09:53:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:56 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e5400aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:53:56] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:53:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:53:56] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:53:56 compute-0 cool_neumann[174100]: {
Jan 26 09:53:56 compute-0 cool_neumann[174100]:     "0": [
Jan 26 09:53:56 compute-0 cool_neumann[174100]:         {
Jan 26 09:53:56 compute-0 cool_neumann[174100]:             "devices": [
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "/dev/loop3"
Jan 26 09:53:56 compute-0 cool_neumann[174100]:             ],
Jan 26 09:53:56 compute-0 cool_neumann[174100]:             "lv_name": "ceph_lv0",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:             "lv_size": "21470642176",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:             "name": "ceph_lv0",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:             "tags": {
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "ceph.cluster_name": "ceph",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "ceph.crush_device_class": "",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "ceph.encrypted": "0",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "ceph.osd_id": "0",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "ceph.type": "block",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "ceph.vdo": "0",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:                 "ceph.with_tpm": "0"
Jan 26 09:53:56 compute-0 cool_neumann[174100]:             },
Jan 26 09:53:56 compute-0 cool_neumann[174100]:             "type": "block",
Jan 26 09:53:56 compute-0 cool_neumann[174100]:             "vg_name": "ceph_vg0"
Jan 26 09:53:56 compute-0 cool_neumann[174100]:         }
Jan 26 09:53:56 compute-0 cool_neumann[174100]:     ]
Jan 26 09:53:56 compute-0 cool_neumann[174100]: }
Jan 26 09:53:56 compute-0 systemd[1]: libpod-6d1b5f792a83c6f1f91bf518311dc9d0db5340ce253d9f7bc48075c3497c36bc.scope: Deactivated successfully.
Jan 26 09:53:56 compute-0 podman[174083]: 2026-01-26 09:53:56.739560022 +0000 UTC m=+0.525288262 container died 6d1b5f792a83c6f1f91bf518311dc9d0db5340ce253d9f7bc48075c3497c36bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_neumann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 09:53:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcc35b2303e56ec0beafbfcb2c2d3ccebd8aec12a3c9c6a2403937e24bd46440-merged.mount: Deactivated successfully.
Jan 26 09:53:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:56 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:56 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e3c0037a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:56 compute-0 podman[174083]: 2026-01-26 09:53:56.817606353 +0000 UTC m=+0.603334583 container remove 6d1b5f792a83c6f1f91bf518311dc9d0db5340ce253d9f7bc48075c3497c36bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_neumann, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:53:56 compute-0 systemd[1]: libpod-conmon-6d1b5f792a83c6f1f91bf518311dc9d0db5340ce253d9f7bc48075c3497c36bc.scope: Deactivated successfully.
Jan 26 09:53:56 compute-0 sudo[173976]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:56 compute-0 sudo[174124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:53:56 compute-0 sudo[174124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:53:56 compute-0 sudo[174124]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:53:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:56.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:53:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:53:57.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:53:57 compute-0 sudo[174149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:53:57 compute-0 sudo[174149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:53:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:57.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:57 compute-0 sudo[174223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:53:57 compute-0 sudo[174223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:53:57 compute-0 sudo[174223]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:57 compute-0 podman[174227]: 2026-01-26 09:53:57.519896859 +0000 UTC m=+0.034792980 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:53:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:53:57 compute-0 podman[174227]: 2026-01-26 09:53:57.838935873 +0000 UTC m=+0.353831924 container create 6527f4da3895191e2a5b3ce696cfbf6d3f67dc179a6f0f9deb905c52fa081a23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_darwin, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:53:57 compute-0 systemd[1]: Started libpod-conmon-6527f4da3895191e2a5b3ce696cfbf6d3f67dc179a6f0f9deb905c52fa081a23.scope.
Jan 26 09:53:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:53:57 compute-0 podman[174227]: 2026-01-26 09:53:57.940048104 +0000 UTC m=+0.454944205 container init 6527f4da3895191e2a5b3ce696cfbf6d3f67dc179a6f0f9deb905c52fa081a23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_darwin, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 09:53:57 compute-0 podman[174227]: 2026-01-26 09:53:57.952710728 +0000 UTC m=+0.467606789 container start 6527f4da3895191e2a5b3ce696cfbf6d3f67dc179a6f0f9deb905c52fa081a23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 26 09:53:57 compute-0 epic_darwin[174277]: 167 167
Jan 26 09:53:57 compute-0 systemd[1]: libpod-6527f4da3895191e2a5b3ce696cfbf6d3f67dc179a6f0f9deb905c52fa081a23.scope: Deactivated successfully.
Jan 26 09:53:57 compute-0 podman[174227]: 2026-01-26 09:53:57.977135152 +0000 UTC m=+0.492031203 container attach 6527f4da3895191e2a5b3ce696cfbf6d3f67dc179a6f0f9deb905c52fa081a23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_darwin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 09:53:57 compute-0 podman[174227]: 2026-01-26 09:53:57.978982491 +0000 UTC m=+0.493878572 container died 6527f4da3895191e2a5b3ce696cfbf6d3f67dc179a6f0f9deb905c52fa081a23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 09:53:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-763f07703a60746aeed2eccd21e0c54eb56586884ec4887bc1346223c528a2f0-merged.mount: Deactivated successfully.
Jan 26 09:53:58 compute-0 podman[174227]: 2026-01-26 09:53:58.02854502 +0000 UTC m=+0.543441051 container remove 6527f4da3895191e2a5b3ce696cfbf6d3f67dc179a6f0f9deb905c52fa081a23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_darwin, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 26 09:53:58 compute-0 systemd[1]: libpod-conmon-6527f4da3895191e2a5b3ce696cfbf6d3f67dc179a6f0f9deb905c52fa081a23.scope: Deactivated successfully.
Jan 26 09:53:58 compute-0 ceph-mon[74456]: pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:58 compute-0 podman[174310]: 2026-01-26 09:53:58.23417558 +0000 UTC m=+0.071608481 container create dbe5efab4964dc2bd5ea2bb0cec4fe08885606a7ce9482e36c0db1d71884d145 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_wing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 09:53:58 compute-0 systemd[1]: Started libpod-conmon-dbe5efab4964dc2bd5ea2bb0cec4fe08885606a7ce9482e36c0db1d71884d145.scope.
Jan 26 09:53:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a76b432a4e87f1a5fa21cc7d26c42616ab45ccde66f91d14d8085ee5136867b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a76b432a4e87f1a5fa21cc7d26c42616ab45ccde66f91d14d8085ee5136867b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:53:58 compute-0 podman[174310]: 2026-01-26 09:53:58.20918088 +0000 UTC m=+0.046613861 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a76b432a4e87f1a5fa21cc7d26c42616ab45ccde66f91d14d8085ee5136867b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a76b432a4e87f1a5fa21cc7d26c42616ab45ccde66f91d14d8085ee5136867b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:53:58 compute-0 podman[174310]: 2026-01-26 09:53:58.313862595 +0000 UTC m=+0.151295546 container init dbe5efab4964dc2bd5ea2bb0cec4fe08885606a7ce9482e36c0db1d71884d145 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 09:53:58 compute-0 podman[174310]: 2026-01-26 09:53:58.321158037 +0000 UTC m=+0.158590938 container start dbe5efab4964dc2bd5ea2bb0cec4fe08885606a7ce9482e36c0db1d71884d145 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 09:53:58 compute-0 podman[174310]: 2026-01-26 09:53:58.324408723 +0000 UTC m=+0.161841634 container attach dbe5efab4964dc2bd5ea2bb0cec4fe08885606a7ce9482e36c0db1d71884d145 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_wing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 26 09:53:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e40001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e48001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:53:58 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:53:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:53:58 compute-0 lvm[174437]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:53:58 compute-0 lvm[174437]: VG ceph_vg0 finished
Jan 26 09:53:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:53:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:53:58.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:53:59 compute-0 relaxed_wing[174333]: {}
Jan 26 09:53:59 compute-0 systemd[1]: libpod-dbe5efab4964dc2bd5ea2bb0cec4fe08885606a7ce9482e36c0db1d71884d145.scope: Deactivated successfully.
Jan 26 09:53:59 compute-0 podman[174310]: 2026-01-26 09:53:59.042822113 +0000 UTC m=+0.880255014 container died dbe5efab4964dc2bd5ea2bb0cec4fe08885606a7ce9482e36c0db1d71884d145 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_wing, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 09:53:59 compute-0 systemd[1]: libpod-dbe5efab4964dc2bd5ea2bb0cec4fe08885606a7ce9482e36c0db1d71884d145.scope: Consumed 1.098s CPU time.
Jan 26 09:53:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a76b432a4e87f1a5fa21cc7d26c42616ab45ccde66f91d14d8085ee5136867b-merged.mount: Deactivated successfully.
Jan 26 09:53:59 compute-0 podman[174310]: 2026-01-26 09:53:59.086939919 +0000 UTC m=+0.924372820 container remove dbe5efab4964dc2bd5ea2bb0cec4fe08885606a7ce9482e36c0db1d71884d145 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_wing, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 26 09:53:59 compute-0 systemd[1]: libpod-conmon-dbe5efab4964dc2bd5ea2bb0cec4fe08885606a7ce9482e36c0db1d71884d145.scope: Deactivated successfully.
Jan 26 09:53:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:53:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:53:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:53:59.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:53:59 compute-0 sudo[174149]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:53:59 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:53:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:53:59 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:53:59 compute-0 sshd-session[174422]: Invalid user test from 157.245.76.178 port 39998
Jan 26 09:53:59 compute-0 sudo[174459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:53:59 compute-0 sudo[174459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:53:59 compute-0 sudo[174459]: pam_unix(sudo:session): session closed for user root
Jan 26 09:53:59 compute-0 sshd-session[174422]: Connection closed by invalid user test 157.245.76.178 port 39998 [preauth]
Jan 26 09:54:00 compute-0 ceph-mon[74456]: pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:00 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:54:00 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:54:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:54:00 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e5400aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:54:00 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e40001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:54:00 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e480023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:54:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:00.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:01.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:02 compute-0 ceph-mon[74456]: pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:54:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:54:02 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:54:02 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e5400aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:54:02 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e28002830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:54:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:02.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:03.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:54:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:54:04 compute-0 ceph-mon[74456]: pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:54:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:54:04 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e480023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:54:04 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e2c003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:54:04 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e5400aa30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:54:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:04.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:54:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:05.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:06 compute-0 kernel: ganesha.nfsd[174593]: segfault at 50 ip 00007f7ed6e1f32e sp 00007f7e3b7fd210 error 4 in libntirpc.so.5.8[7f7ed6e04000+2c000] likely on CPU 2 (core 0, socket 2)
Jan 26 09:54:06 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 09:54:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[165303]: 26/01/2026 09:54:06 : epoch 6977396e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7e28002830 fd 38 proxy ignored for local
Jan 26 09:54:06 compute-0 systemd[1]: Started Process Core Dump (PID 174599/UID 0).
Jan 26 09:54:06 compute-0 ceph-mon[74456]: pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:06 compute-0 podman[174600]: 2026-01-26 09:54:06.576778989 +0000 UTC m=+0.117832332 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:54:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:54:06] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:54:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:54:06] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:54:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:54:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:07.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:54:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:54:07.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:54:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:54:07.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:54:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:07.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:07 compute-0 ceph-mon[74456]: pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:07 compute-0 systemd-coredump[174601]: Process 165319 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 59:
                                                    #0  0x00007f7ed6e1f32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    #1  0x0000000000000000 n/a (n/a + 0x0)
                                                    #2  0x00007f7ed6e29900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 09:54:07 compute-0 systemd[1]: systemd-coredump@5-174599-0.service: Deactivated successfully.
Jan 26 09:54:07 compute-0 systemd[1]: systemd-coredump@5-174599-0.service: Consumed 1.243s CPU time.
Jan 26 09:54:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:54:07 compute-0 podman[174632]: 2026-01-26 09:54:07.86018303 +0000 UTC m=+0.035647093 container died 319680f311a6bff548a31262b5a1ee997ea75b437bea73f9338929472e4b9256 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:54:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-94a509f8521ed7fd639d7ee7e093abad1c6697bc51383586744c85cd90bf77f7-merged.mount: Deactivated successfully.
Jan 26 09:54:07 compute-0 podman[174632]: 2026-01-26 09:54:07.953105143 +0000 UTC m=+0.128569136 container remove 319680f311a6bff548a31262b5a1ee997ea75b437bea73f9338929472e4b9256 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 26 09:54:07 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 09:54:08 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 09:54:08 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.540s CPU time.
Jan 26 09:54:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:09.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:09.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:10 compute-0 ceph-mon[74456]: pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095410 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:54:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:54:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:54:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:11.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:54:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:54:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:11.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:54:12 compute-0 ceph-mon[74456]: pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:54:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095412 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:54:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:54:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:54:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:13.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:13.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:14 compute-0 ceph-mon[74456]: pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:54:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:54:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:15.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:54:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:15.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:54:16 compute-0 ceph-mon[74456]: pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:54:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:54:16] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:54:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:54:16] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:54:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:54:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:17.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:54:17.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:54:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:17.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:17 compute-0 ceph-mon[74456]: pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:54:17 compute-0 sudo[174690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:54:17 compute-0 sudo[174690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:54:17 compute-0 sudo[174690]: pam_unix(sudo:session): session closed for user root
Jan 26 09:54:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:54:18 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 6.
Jan 26 09:54:18 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:54:18 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.540s CPU time.
Jan 26 09:54:18 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:54:18
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.control', '.nfs', 'default.rgw.log']
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:54:18 compute-0 podman[174769]: 2026-01-26 09:54:18.52386892 +0000 UTC m=+0.024405876 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:54:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:54:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:54:18 compute-0 podman[174769]: 2026-01-26 09:54:18.789424883 +0000 UTC m=+0.289961789 container create 3cb211a4d50da5fd8c302603652427074bc04b8b57371ba838a1193823978f92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True)
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:54:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:54:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a2cb16f2e83d2a20addd54196ac34dfc4c057648f7669ffa37f15a82827ce6/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 09:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a2cb16f2e83d2a20addd54196ac34dfc4c057648f7669ffa37f15a82827ce6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a2cb16f2e83d2a20addd54196ac34dfc4c057648f7669ffa37f15a82827ce6/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a2cb16f2e83d2a20addd54196ac34dfc4c057648f7669ffa37f15a82827ce6/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:54:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:19.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:19 compute-0 podman[174769]: 2026-01-26 09:54:19.084262908 +0000 UTC m=+0.584799814 container init 3cb211a4d50da5fd8c302603652427074bc04b8b57371ba838a1193823978f92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:54:19 compute-0 podman[174769]: 2026-01-26 09:54:19.090986566 +0000 UTC m=+0.591523442 container start 3cb211a4d50da5fd8c302603652427074bc04b8b57371ba838a1193823978f92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:54:19 compute-0 bash[174769]: 3cb211a4d50da5fd8c302603652427074bc04b8b57371ba838a1193823978f92
Jan 26 09:54:19 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:54:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:19 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 09:54:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:19 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 09:54:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:19.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:19 compute-0 kernel: SELinux:  Converting 2782 SID table entries...
Jan 26 09:54:19 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 09:54:19 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 09:54:19 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 09:54:19 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 09:54:19 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 09:54:19 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 09:54:19 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 09:54:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:19 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 09:54:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:19 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 09:54:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:19 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 09:54:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:19 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 09:54:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:19 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 09:54:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:19 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:54:20 compute-0 ceph-mon[74456]: pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:54:20 compute-0 ceph-mgr[74755]: [devicehealth INFO root] Check health
Jan 26 09:54:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:54:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:21.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:54:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:21.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:54:22 compute-0 ceph-mon[74456]: pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:54:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:54:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:54:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:23.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:54:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:23.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:54:24 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 26 09:54:24 compute-0 podman[174835]: 2026-01-26 09:54:24.131074947 +0000 UTC m=+0.056338359 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 09:54:24 compute-0 ceph-mon[74456]: pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:54:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Jan 26 09:54:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:25.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:54:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:25.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:54:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:25 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:54:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:25 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:54:26 compute-0 ceph-mon[74456]: pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Jan 26 09:54:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:54:26] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:54:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:54:26] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:54:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Jan 26 09:54:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:54:27.015Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:54:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:54:27.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:54:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:54:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:27.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:54:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:27.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:54:28 compute-0 ceph-mon[74456]: pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Jan 26 09:54:28 compute-0 kernel: SELinux:  Converting 2782 SID table entries...
Jan 26 09:54:28 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 09:54:28 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 09:54:28 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 09:54:28 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 09:54:28 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 09:54:28 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 09:54:28 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 09:54:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Jan 26 09:54:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:29.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:29.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:30 compute-0 ceph-mon[74456]: pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Jan 26 09:54:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 26 09:54:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:31.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:54:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:31.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 09:54:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:31 : epoch 697739cb : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:54:31 compute-0 ceph-mon[74456]: pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 26 09:54:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:32 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5614000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095432 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 51ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:54:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:54:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:54:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:32 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f560c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:32 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f560c002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:33.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:33.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:54:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:54:34 compute-0 ceph-mon[74456]: pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:54:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:54:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095434 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:54:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:34 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5614000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:54:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:34 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5600000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:34 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55f4000d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:54:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:35.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:54:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:54:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:35.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:54:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:36 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5600000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:36 compute-0 ceph-mon[74456]: pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:54:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:54:36] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 26 09:54:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:54:36] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 26 09:54:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 09:54:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:36 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:36 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55f4000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:54:37.017Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:54:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:54:37.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:54:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:37.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:37 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 26 09:54:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:37.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:37 compute-0 podman[174892]: 2026-01-26 09:54:37.274805927 +0000 UTC m=+0.189940707 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 09:54:37 compute-0 ceph-mon[74456]: pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 09:54:37 compute-0 sudo[174920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:54:37 compute-0 sudo[174920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:54:37 compute-0 sudo[174920]: pam_unix(sudo:session): session closed for user root
Jan 26 09:54:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:54:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:38 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55f0000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 09:54:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:38 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55ec000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:38 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5600001f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:39.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:39.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:39 compute-0 ceph-mon[74456]: pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 09:54:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:40 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5600001f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:40 compute-0 sshd-session[174947]: Invalid user test from 157.245.76.178 port 43268
Jan 26 09:54:40 compute-0 sshd-session[174947]: Connection closed by invalid user test 157.245.76.178 port 43268 [preauth]
Jan 26 09:54:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 09:54:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:40 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:40 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55ec000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:41.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:41.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=cleanup t=2026-01-26T09:54:41.480854284Z level=info msg="Completed cleanup jobs" duration=81.737448ms
Jan 26 09:54:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=grafana.update.checker t=2026-01-26T09:54:41.520656315Z level=info msg="Update check succeeded" duration=50.487203ms
Jan 26 09:54:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=plugins.update.checker t=2026-01-26T09:54:41.524904698Z level=info msg="Update check succeeded" duration=54.782598ms
Jan 26 09:54:42 compute-0 ceph-mon[74456]: pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 09:54:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:42 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55f40020e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:54:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:54:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:42 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5600001f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:42 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4001b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:43.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:43.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:44 compute-0 ceph-mon[74456]: pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:54:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:44 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55ec001e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:54:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:44 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55f4002280 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:44 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5600003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:45.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:45.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:46 compute-0 ceph-mon[74456]: pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:54:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:46 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4001b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:54:46] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 26 09:54:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:54:46] "GET /metrics HTTP/1.1" 200 48275 "" "Prometheus/2.51.0"
Jan 26 09:54:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:46 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55ec001e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:46 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55f40030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:54:47.018Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:54:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:54:47.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:54:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:54:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:47.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:54:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:47.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:54:48 compute-0 ceph-mon[74456]: pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:48 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5600003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:54:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:54:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:54:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:54:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:54:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:54:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:54:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:54:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:48 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4001b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:48 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55ec002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:49.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:49.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:54:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:50 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55f4003220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:54:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:51 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4002f00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:51 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5600003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:51 compute-0 ceph-mon[74456]: pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:54:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:51.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:54:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:54:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:54:52 compute-0 ceph-mon[74456]: pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:54:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:52 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55ec002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:54:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:53 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56180013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:53 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56180013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:53.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:53.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:54 compute-0 ceph-mon[74456]: pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:54 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:54:54.675 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 09:54:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:54:54.676 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 09:54:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:54:54.676 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 09:54:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:54:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:55 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55ec002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:55 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55f4003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:55.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:55 compute-0 podman[182186]: 2026-01-26 09:54:55.119803375 +0000 UTC m=+0.053257822 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 09:54:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 26 09:54:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:55.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 26 09:54:56 compute-0 ceph-mon[74456]: pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:54:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:56 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5618002530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:54:56] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 26 09:54:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:54:56] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 26 09:54:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:57 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:57 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55ec003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:54:57.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:54:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:57.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:57.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:54:57 compute-0 sudo[183874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:54:57 compute-0 sudo[183874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:54:57 compute-0 sudo[183874]: pam_unix(sudo:session): session closed for user root
Jan 26 09:54:58 compute-0 ceph-mon[74456]: pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:58 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:54:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:59 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55ec003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:54:59 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55f4003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:54:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:54:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:54:59.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:54:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:54:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 26 09:54:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:54:59.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 26 09:54:59 compute-0 sudo[184966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:54:59 compute-0 sudo[184966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:54:59 compute-0 sudo[184966]: pam_unix(sudo:session): session closed for user root
Jan 26 09:54:59 compute-0 sudo[185035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:54:59 compute-0 sudo[185035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:55:00 compute-0 sudo[185035]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 09:55:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 09:55:00 compute-0 ceph-mon[74456]: pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:00 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 09:55:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:00 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55ec003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:55:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:01 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5600003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:01 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:01.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:01.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 09:55:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 09:55:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 09:55:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 09:55:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:02 compute-0 ceph-mon[74456]: pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:55:02 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:02 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:02 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:02 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:02 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55f4003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 26 09:55:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 09:55:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:55:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 26 09:55:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 09:55:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:55:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:55:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:55:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:55:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:55:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:55:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:55:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:55:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:55:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:55:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:55:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:55:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:03 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:03 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5618002530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:03.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:03 compute-0 sudo[187296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:55:03 compute-0 sudo[187296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:55:03 compute-0 sudo[187296]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:03 compute-0 sudo[187356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:55:03 compute-0 sudo[187356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:55:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 26 09:55:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:03.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 26 09:55:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 09:55:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 09:55:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:55:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:55:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:55:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:55:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:55:03 compute-0 podman[187660]: 2026-01-26 09:55:03.566582655 +0000 UTC m=+0.047779978 container create 5a01e911cb1539d0469a60d50419ee1d588393ae6377ba21381cdc1de69078e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 09:55:03 compute-0 systemd[1]: Started libpod-conmon-5a01e911cb1539d0469a60d50419ee1d588393ae6377ba21381cdc1de69078e6.scope.
Jan 26 09:55:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:55:03 compute-0 podman[187660]: 2026-01-26 09:55:03.543509504 +0000 UTC m=+0.024706877 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:55:03 compute-0 podman[187660]: 2026-01-26 09:55:03.697593738 +0000 UTC m=+0.178791091 container init 5a01e911cb1539d0469a60d50419ee1d588393ae6377ba21381cdc1de69078e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_carver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 09:55:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:55:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:55:03 compute-0 podman[187660]: 2026-01-26 09:55:03.705975042 +0000 UTC m=+0.187172375 container start 5a01e911cb1539d0469a60d50419ee1d588393ae6377ba21381cdc1de69078e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 26 09:55:03 compute-0 podman[187660]: 2026-01-26 09:55:03.710467816 +0000 UTC m=+0.191665159 container attach 5a01e911cb1539d0469a60d50419ee1d588393ae6377ba21381cdc1de69078e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_carver, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:55:03 compute-0 mystifying_carver[187747]: 167 167
Jan 26 09:55:03 compute-0 systemd[1]: libpod-5a01e911cb1539d0469a60d50419ee1d588393ae6377ba21381cdc1de69078e6.scope: Deactivated successfully.
Jan 26 09:55:03 compute-0 podman[187660]: 2026-01-26 09:55:03.712843115 +0000 UTC m=+0.194040438 container died 5a01e911cb1539d0469a60d50419ee1d588393ae6377ba21381cdc1de69078e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2888fff8d45444347fa4dc883ae61f1a4ff9c6c963770cc07b2936b1db108f0-merged.mount: Deactivated successfully.
Jan 26 09:55:03 compute-0 podman[187660]: 2026-01-26 09:55:03.938371839 +0000 UTC m=+0.419569162 container remove 5a01e911cb1539d0469a60d50419ee1d588393ae6377ba21381cdc1de69078e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_carver, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:55:03 compute-0 systemd[1]: libpod-conmon-5a01e911cb1539d0469a60d50419ee1d588393ae6377ba21381cdc1de69078e6.scope: Deactivated successfully.
Jan 26 09:55:04 compute-0 podman[188071]: 2026-01-26 09:55:04.148810098 +0000 UTC m=+0.094296228 container create 77620e2e1a7134b71ab8d13d384474102a1ab4aa97b40ce14d66b17ce58c5924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_golick, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:55:04 compute-0 podman[188071]: 2026-01-26 09:55:04.083978316 +0000 UTC m=+0.029464476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:55:04 compute-0 systemd[1]: Started libpod-conmon-77620e2e1a7134b71ab8d13d384474102a1ab4aa97b40ce14d66b17ce58c5924.scope.
Jan 26 09:55:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1980beb2297fa2a9b68b5d25ac616ddcd7a2ba071a5919272d723779a063748/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1980beb2297fa2a9b68b5d25ac616ddcd7a2ba071a5919272d723779a063748/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1980beb2297fa2a9b68b5d25ac616ddcd7a2ba071a5919272d723779a063748/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1980beb2297fa2a9b68b5d25ac616ddcd7a2ba071a5919272d723779a063748/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1980beb2297fa2a9b68b5d25ac616ddcd7a2ba071a5919272d723779a063748/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:04 compute-0 podman[188071]: 2026-01-26 09:55:04.340848143 +0000 UTC m=+0.286334293 container init 77620e2e1a7134b71ab8d13d384474102a1ab4aa97b40ce14d66b17ce58c5924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_golick, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:55:04 compute-0 podman[188071]: 2026-01-26 09:55:04.348132875 +0000 UTC m=+0.293619005 container start 77620e2e1a7134b71ab8d13d384474102a1ab4aa97b40ce14d66b17ce58c5924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_golick, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 26 09:55:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:04 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:04 compute-0 podman[188071]: 2026-01-26 09:55:04.537658558 +0000 UTC m=+0.483144698 container attach 77620e2e1a7134b71ab8d13d384474102a1ab4aa97b40ce14d66b17ce58c5924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_golick, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 09:55:04 compute-0 upbeat_golick[188184]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:55:04 compute-0 upbeat_golick[188184]: --> All data devices are unavailable
Jan 26 09:55:04 compute-0 systemd[1]: libpod-77620e2e1a7134b71ab8d13d384474102a1ab4aa97b40ce14d66b17ce58c5924.scope: Deactivated successfully.
Jan 26 09:55:04 compute-0 podman[188071]: 2026-01-26 09:55:04.749625139 +0000 UTC m=+0.695111279 container died 77620e2e1a7134b71ab8d13d384474102a1ab4aa97b40ce14d66b17ce58c5924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_golick, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:55:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1980beb2297fa2a9b68b5d25ac616ddcd7a2ba071a5919272d723779a063748-merged.mount: Deactivated successfully.
Jan 26 09:55:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:05 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55ec003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:05 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56000041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:05.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:05 compute-0 podman[188071]: 2026-01-26 09:55:05.08005868 +0000 UTC m=+1.025544810 container remove 77620e2e1a7134b71ab8d13d384474102a1ab4aa97b40ce14d66b17ce58c5924 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:55:05 compute-0 sudo[187356]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:05 compute-0 systemd[1]: libpod-conmon-77620e2e1a7134b71ab8d13d384474102a1ab4aa97b40ce14d66b17ce58c5924.scope: Deactivated successfully.
Jan 26 09:55:05 compute-0 sudo[188786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:55:05 compute-0 sudo[188786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:55:05 compute-0 sudo[188786]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:05.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:05 compute-0 sudo[188854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:55:05 compute-0 sudo[188854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:55:05 compute-0 podman[189173]: 2026-01-26 09:55:05.628967649 +0000 UTC m=+0.035190145 container create 941107645d227993d86c7d944d0b29bdb4c8b96a82f6230c1f1bdc073c6d6aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_kilby, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:55:05 compute-0 systemd[1]: Started libpod-conmon-941107645d227993d86c7d944d0b29bdb4c8b96a82f6230c1f1bdc073c6d6aea.scope.
Jan 26 09:55:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:55:05 compute-0 podman[189173]: 2026-01-26 09:55:05.613927345 +0000 UTC m=+0.020149871 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:55:05 compute-0 podman[189173]: 2026-01-26 09:55:05.751070815 +0000 UTC m=+0.157293341 container init 941107645d227993d86c7d944d0b29bdb4c8b96a82f6230c1f1bdc073c6d6aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:55:05 compute-0 podman[189173]: 2026-01-26 09:55:05.758298526 +0000 UTC m=+0.164521032 container start 941107645d227993d86c7d944d0b29bdb4c8b96a82f6230c1f1bdc073c6d6aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:55:05 compute-0 podman[189173]: 2026-01-26 09:55:05.763543765 +0000 UTC m=+0.169766291 container attach 941107645d227993d86c7d944d0b29bdb4c8b96a82f6230c1f1bdc073c6d6aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 09:55:05 compute-0 funny_kilby[189243]: 167 167
Jan 26 09:55:05 compute-0 systemd[1]: libpod-941107645d227993d86c7d944d0b29bdb4c8b96a82f6230c1f1bdc073c6d6aea.scope: Deactivated successfully.
Jan 26 09:55:05 compute-0 podman[189173]: 2026-01-26 09:55:05.768176162 +0000 UTC m=+0.174398668 container died 941107645d227993d86c7d944d0b29bdb4c8b96a82f6230c1f1bdc073c6d6aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_kilby, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c06a95998e7de7289f4b2f59701dbc39d2daed6edf94ab89a691e7f01b1dcd72-merged.mount: Deactivated successfully.
Jan 26 09:55:05 compute-0 podman[189173]: 2026-01-26 09:55:05.890061164 +0000 UTC m=+0.296283670 container remove 941107645d227993d86c7d944d0b29bdb4c8b96a82f6230c1f1bdc073c6d6aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Jan 26 09:55:05 compute-0 ceph-mon[74456]: pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:55:05 compute-0 systemd[1]: libpod-conmon-941107645d227993d86c7d944d0b29bdb4c8b96a82f6230c1f1bdc073c6d6aea.scope: Deactivated successfully.
Jan 26 09:55:06 compute-0 podman[189530]: 2026-01-26 09:55:06.061893008 +0000 UTC m=+0.029668650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:55:06 compute-0 podman[189530]: 2026-01-26 09:55:06.155270616 +0000 UTC m=+0.123046208 container create 5f584c8b3586536012d2f4df4bd93bf48e7887402d4e0a69db1119f1bd219e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:55:06 compute-0 systemd[1]: Started libpod-conmon-5f584c8b3586536012d2f4df4bd93bf48e7887402d4e0a69db1119f1bd219e36.scope.
Jan 26 09:55:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45204b5da2327bad604223380f23f81795462e7b6b6e9223a8c1f438d3f165c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45204b5da2327bad604223380f23f81795462e7b6b6e9223a8c1f438d3f165c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45204b5da2327bad604223380f23f81795462e7b6b6e9223a8c1f438d3f165c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45204b5da2327bad604223380f23f81795462e7b6b6e9223a8c1f438d3f165c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:06 compute-0 podman[189530]: 2026-01-26 09:55:06.266873863 +0000 UTC m=+0.234649475 container init 5f584c8b3586536012d2f4df4bd93bf48e7887402d4e0a69db1119f1bd219e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:55:06 compute-0 podman[189530]: 2026-01-26 09:55:06.275577225 +0000 UTC m=+0.243352857 container start 5f584c8b3586536012d2f4df4bd93bf48e7887402d4e0a69db1119f1bd219e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 26 09:55:06 compute-0 podman[189530]: 2026-01-26 09:55:06.279981097 +0000 UTC m=+0.247756719 container attach 5f584c8b3586536012d2f4df4bd93bf48e7887402d4e0a69db1119f1bd219e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:55:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:06 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5618002530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:06 compute-0 agitated_tesla[189655]: {
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:     "0": [
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:         {
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:             "devices": [
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "/dev/loop3"
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:             ],
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:             "lv_name": "ceph_lv0",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:             "lv_size": "21470642176",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:             "name": "ceph_lv0",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:             "tags": {
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "ceph.cluster_name": "ceph",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "ceph.crush_device_class": "",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "ceph.encrypted": "0",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "ceph.osd_id": "0",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "ceph.type": "block",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "ceph.vdo": "0",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:                 "ceph.with_tpm": "0"
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:             },
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:             "type": "block",
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:             "vg_name": "ceph_vg0"
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:         }
Jan 26 09:55:06 compute-0 agitated_tesla[189655]:     ]
Jan 26 09:55:06 compute-0 agitated_tesla[189655]: }
Jan 26 09:55:06 compute-0 systemd[1]: libpod-5f584c8b3586536012d2f4df4bd93bf48e7887402d4e0a69db1119f1bd219e36.scope: Deactivated successfully.
Jan 26 09:55:06 compute-0 podman[189530]: 2026-01-26 09:55:06.587827347 +0000 UTC m=+0.555602959 container died 5f584c8b3586536012d2f4df4bd93bf48e7887402d4e0a69db1119f1bd219e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 09:55:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:55:06] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:55:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:55:06] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:55:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-45204b5da2327bad604223380f23f81795462e7b6b6e9223a8c1f438d3f165c0-merged.mount: Deactivated successfully.
Jan 26 09:55:06 compute-0 podman[189530]: 2026-01-26 09:55:06.731091895 +0000 UTC m=+0.698867507 container remove 5f584c8b3586536012d2f4df4bd93bf48e7887402d4e0a69db1119f1bd219e36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tesla, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 09:55:06 compute-0 systemd[1]: libpod-conmon-5f584c8b3586536012d2f4df4bd93bf48e7887402d4e0a69db1119f1bd219e36.scope: Deactivated successfully.
Jan 26 09:55:06 compute-0 sudo[188854]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:06 compute-0 sudo[190004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:55:06 compute-0 sudo[190004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:55:06 compute-0 sudo[190004]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:06 compute-0 sudo[190069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:55:06 compute-0 sudo[190069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:55:06 compute-0 ceph-mon[74456]: pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:55:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:07 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:55:07.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:55:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:07 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:55:07.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:55:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:07.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 26 09:55:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:07.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 26 09:55:07 compute-0 podman[190404]: 2026-01-26 09:55:07.328545996 +0000 UTC m=+0.037159117 container create 97aeccfff362fedb4e781434a1a663a78e05fd5a5a88868dc332834454a3ffb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_germain, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 26 09:55:07 compute-0 systemd[1]: Started libpod-conmon-97aeccfff362fedb4e781434a1a663a78e05fd5a5a88868dc332834454a3ffb3.scope.
Jan 26 09:55:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:55:07 compute-0 podman[190404]: 2026-01-26 09:55:07.382667875 +0000 UTC m=+0.091281026 container init 97aeccfff362fedb4e781434a1a663a78e05fd5a5a88868dc332834454a3ffb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_germain, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:55:07 compute-0 podman[190404]: 2026-01-26 09:55:07.38914701 +0000 UTC m=+0.097760131 container start 97aeccfff362fedb4e781434a1a663a78e05fd5a5a88868dc332834454a3ffb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:55:07 compute-0 quirky_germain[190473]: 167 167
Jan 26 09:55:07 compute-0 systemd[1]: libpod-97aeccfff362fedb4e781434a1a663a78e05fd5a5a88868dc332834454a3ffb3.scope: Deactivated successfully.
Jan 26 09:55:07 compute-0 podman[190404]: 2026-01-26 09:55:07.313685336 +0000 UTC m=+0.022298487 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:55:07 compute-0 podman[190404]: 2026-01-26 09:55:07.494102749 +0000 UTC m=+0.202715870 container attach 97aeccfff362fedb4e781434a1a663a78e05fd5a5a88868dc332834454a3ffb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_germain, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 26 09:55:07 compute-0 podman[190404]: 2026-01-26 09:55:07.494456566 +0000 UTC m=+0.203069687 container died 97aeccfff362fedb4e781434a1a663a78e05fd5a5a88868dc332834454a3ffb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_germain, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:55:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-35a939679e66a462b2b2a68bf55960505d2c3747c9e9d826091ff1d10b445a2b-merged.mount: Deactivated successfully.
Jan 26 09:55:07 compute-0 podman[190404]: 2026-01-26 09:55:07.530655591 +0000 UTC m=+0.239268722 container remove 97aeccfff362fedb4e781434a1a663a78e05fd5a5a88868dc332834454a3ffb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_germain, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:55:07 compute-0 systemd[1]: libpod-conmon-97aeccfff362fedb4e781434a1a663a78e05fd5a5a88868dc332834454a3ffb3.scope: Deactivated successfully.
Jan 26 09:55:07 compute-0 podman[190456]: 2026-01-26 09:55:07.623562419 +0000 UTC m=+0.258947371 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 09:55:07 compute-0 podman[190736]: 2026-01-26 09:55:07.752073299 +0000 UTC m=+0.107927352 container create 7d83dec9ebe716c838f9e1ae4b57dc4bb895826397da7c14bb3ec7ff10dc1773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 09:55:07 compute-0 podman[190736]: 2026-01-26 09:55:07.665517824 +0000 UTC m=+0.021371907 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:55:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:55:07 compute-0 systemd[1]: Started libpod-conmon-7d83dec9ebe716c838f9e1ae4b57dc4bb895826397da7c14bb3ec7ff10dc1773.scope.
Jan 26 09:55:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9729a5a8465ee63ab9a29b360d17e78faadf8bd3afdc693eba35e392c57ed4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9729a5a8465ee63ab9a29b360d17e78faadf8bd3afdc693eba35e392c57ed4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9729a5a8465ee63ab9a29b360d17e78faadf8bd3afdc693eba35e392c57ed4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9729a5a8465ee63ab9a29b360d17e78faadf8bd3afdc693eba35e392c57ed4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:08 compute-0 podman[190736]: 2026-01-26 09:55:08.048461401 +0000 UTC m=+0.404315474 container init 7d83dec9ebe716c838f9e1ae4b57dc4bb895826397da7c14bb3ec7ff10dc1773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_hertz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:55:08 compute-0 podman[190736]: 2026-01-26 09:55:08.056853256 +0000 UTC m=+0.412707299 container start 7d83dec9ebe716c838f9e1ae4b57dc4bb895826397da7c14bb3ec7ff10dc1773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_hertz, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:55:08 compute-0 podman[190736]: 2026-01-26 09:55:08.060293887 +0000 UTC m=+0.416147950 container attach 7d83dec9ebe716c838f9e1ae4b57dc4bb895826397da7c14bb3ec7ff10dc1773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_hertz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 26 09:55:08 compute-0 ceph-mon[74456]: pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:08 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56000041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:08 compute-0 lvm[191560]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:55:08 compute-0 lvm[191560]: VG ceph_vg0 finished
Jan 26 09:55:08 compute-0 sleepy_hertz[190900]: {}
Jan 26 09:55:08 compute-0 systemd[1]: libpod-7d83dec9ebe716c838f9e1ae4b57dc4bb895826397da7c14bb3ec7ff10dc1773.scope: Deactivated successfully.
Jan 26 09:55:08 compute-0 systemd[1]: libpod-7d83dec9ebe716c838f9e1ae4b57dc4bb895826397da7c14bb3ec7ff10dc1773.scope: Consumed 1.057s CPU time.
Jan 26 09:55:08 compute-0 podman[190736]: 2026-01-26 09:55:08.739949693 +0000 UTC m=+1.095803746 container died 7d83dec9ebe716c838f9e1ae4b57dc4bb895826397da7c14bb3ec7ff10dc1773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_hertz, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:55:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:09 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56000041f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:09 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55f4003b40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:09.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 26 09:55:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:09.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 26 09:55:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9729a5a8465ee63ab9a29b360d17e78faadf8bd3afdc693eba35e392c57ed4c-merged.mount: Deactivated successfully.
Jan 26 09:55:09 compute-0 podman[190736]: 2026-01-26 09:55:09.5937469 +0000 UTC m=+1.949600953 container remove 7d83dec9ebe716c838f9e1ae4b57dc4bb895826397da7c14bb3ec7ff10dc1773 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 09:55:09 compute-0 systemd[1]: libpod-conmon-7d83dec9ebe716c838f9e1ae4b57dc4bb895826397da7c14bb3ec7ff10dc1773.scope: Deactivated successfully.
Jan 26 09:55:09 compute-0 sudo[190069]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:55:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:10 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:55:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:55:10 compute-0 ceph-mon[74456]: pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:11 compute-0 sudo[192567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:55:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:11 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56000041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:11 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55f0001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:11 compute-0 sudo[192567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:55:11 compute-0 sudo[192567]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:11.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:11.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:12 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55f4003b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:12 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:12 compute-0 ceph-mon[74456]: pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:55:12 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:55:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:55:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:13 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5614001340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:13 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f55e4003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:13.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 26 09:55:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:13.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 26 09:55:13 compute-0 ceph-mon[74456]: pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[174785]: 26/01/2026 09:55:14 : epoch 697739cb : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5614001340 fd 38 proxy ignored for local
Jan 26 09:55:14 compute-0 kernel: ganesha.nfsd[190196]: segfault at 50 ip 00007f569ff9b32e sp 00007f56267fb210 error 4 in libntirpc.so.5.8[7f569ff80000+2c000] likely on CPU 1 (core 0, socket 1)
Jan 26 09:55:14 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 09:55:14 compute-0 systemd[1]: Started Process Core Dump (PID 192613/UID 0).
Jan 26 09:55:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:55:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:15.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:15.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:55:16] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:55:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:55:16] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:55:16 compute-0 systemd-coredump[192614]: Process 174789 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 58:
                                                    #0  0x00007f569ff9b32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 09:55:16 compute-0 ceph-mon[74456]: pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:55:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:16 compute-0 systemd[1]: systemd-coredump@6-192613-0.service: Deactivated successfully.
Jan 26 09:55:16 compute-0 systemd[1]: systemd-coredump@6-192613-0.service: Consumed 1.460s CPU time.
Jan 26 09:55:16 compute-0 podman[192626]: 2026-01-26 09:55:16.908744665 +0000 UTC m=+0.032555850 container died 3cb211a4d50da5fd8c302603652427074bc04b8b57371ba838a1193823978f92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 09:55:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4a2cb16f2e83d2a20addd54196ac34dfc4c057648f7669ffa37f15a82827ce6-merged.mount: Deactivated successfully.
Jan 26 09:55:16 compute-0 podman[192626]: 2026-01-26 09:55:16.953928007 +0000 UTC m=+0.077739202 container remove 3cb211a4d50da5fd8c302603652427074bc04b8b57371ba838a1193823978f92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 09:55:16 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 09:55:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:55:17.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:55:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:17.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:17 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 09:55:17 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.804s CPU time.
Jan 26 09:55:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:17.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:17 compute-0 ceph-mon[74456]: pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:55:17 compute-0 sudo[192668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:55:17 compute-0 sudo[192668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:55:17 compute-0 sudo[192668]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:55:18
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.log', '.mgr', '.rgw.root', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'vms', 'images', 'default.rgw.control', 'backups']
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:55:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:55:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:55:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:55:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:55:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:19.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 26 09:55:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:19.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 26 09:55:19 compute-0 ceph-mon[74456]: pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:55:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:21.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:21.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:22 compute-0 ceph-mon[74456]: pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:55:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095522 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:55:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:55:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:23.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:23.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:23 compute-0 kernel: SELinux:  Converting 2783 SID table entries...
Jan 26 09:55:23 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 09:55:23 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 09:55:23 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 09:55:23 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 09:55:23 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 09:55:23 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 09:55:23 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 09:55:23 compute-0 sshd-session[192700]: Invalid user oracle from 157.245.76.178 port 47826
Jan 26 09:55:24 compute-0 sshd-session[192700]: Connection closed by invalid user oracle 157.245.76.178 port 47826 [preauth]
Jan 26 09:55:24 compute-0 ceph-mon[74456]: pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:24 compute-0 groupadd[192715]: group added to /etc/group: name=dnsmasq, GID=992
Jan 26 09:55:24 compute-0 groupadd[192715]: group added to /etc/gshadow: name=dnsmasq
Jan 26 09:55:24 compute-0 groupadd[192715]: new group: name=dnsmasq, GID=992
Jan 26 09:55:24 compute-0 useradd[192722]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 26 09:55:25 compute-0 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Jan 26 09:55:25 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 26 09:55:25 compute-0 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Jan 26 09:55:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:25.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:25.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:26 compute-0 podman[192733]: 2026-01-26 09:55:26.128903234 +0000 UTC m=+0.052310542 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 26 09:55:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:55:26] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:55:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:55:26] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:55:26 compute-0 groupadd[192755]: group added to /etc/group: name=clevis, GID=991
Jan 26 09:55:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:55:26 compute-0 groupadd[192755]: group added to /etc/gshadow: name=clevis
Jan 26 09:55:26 compute-0 groupadd[192755]: new group: name=clevis, GID=991
Jan 26 09:55:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:55:27.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:55:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 26 09:55:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:27.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 26 09:55:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:27.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:27 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 7.
Jan 26 09:55:27 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:55:27 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.804s CPU time.
Jan 26 09:55:27 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:55:27 compute-0 useradd[192764]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 26 09:55:27 compute-0 podman[192815]: 2026-01-26 09:55:27.572470082 +0000 UTC m=+0.023903820 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:55:27 compute-0 podman[192815]: 2026-01-26 09:55:27.817461381 +0000 UTC m=+0.268895069 container create 0041237f0bd72d492d807afb602e86f822e62edd08cccd96fef4d8275536f1db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:55:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3318b966f25794610fc7f2c252c92b07c8f469a45108d8c5b900bf1cd72323a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3318b966f25794610fc7f2c252c92b07c8f469a45108d8c5b900bf1cd72323a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3318b966f25794610fc7f2c252c92b07c8f469a45108d8c5b900bf1cd72323a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3318b966f25794610fc7f2c252c92b07c8f469a45108d8c5b900bf1cd72323a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:55:27 compute-0 usermod[192841]: add 'clevis' to group 'tss'
Jan 26 09:55:27 compute-0 usermod[192841]: add 'clevis' to shadow group 'tss'
Jan 26 09:55:27 compute-0 ceph-mon[74456]: pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:27 compute-0 podman[192815]: 2026-01-26 09:55:27.900430412 +0000 UTC m=+0.351864080 container init 0041237f0bd72d492d807afb602e86f822e62edd08cccd96fef4d8275536f1db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:55:27 compute-0 podman[192815]: 2026-01-26 09:55:27.907670503 +0000 UTC m=+0.359104151 container start 0041237f0bd72d492d807afb602e86f822e62edd08cccd96fef4d8275536f1db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 26 09:55:27 compute-0 bash[192815]: 0041237f0bd72d492d807afb602e86f822e62edd08cccd96fef4d8275536f1db
Jan 26 09:55:27 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:55:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:27 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 09:55:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:27 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 09:55:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:27 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 09:55:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:27 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 09:55:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:27 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 09:55:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:27 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 09:55:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:27 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 09:55:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:28 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:55:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:55:28 compute-0 ceph-mon[74456]: pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:55:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:29.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:30 compute-0 ceph-mon[74456]: pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:55:30 compute-0 polkitd[43452]: Reloading rules
Jan 26 09:55:30 compute-0 polkitd[43452]: Collecting garbage unconditionally...
Jan 26 09:55:30 compute-0 polkitd[43452]: Loading rules from directory /etc/polkit-1/rules.d
Jan 26 09:55:30 compute-0 polkitd[43452]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 26 09:55:30 compute-0 polkitd[43452]: Finished loading, compiling and executing 3 rules
Jan 26 09:55:30 compute-0 polkitd[43452]: Reloading rules
Jan 26 09:55:30 compute-0 polkitd[43452]: Collecting garbage unconditionally...
Jan 26 09:55:30 compute-0 polkitd[43452]: Loading rules from directory /etc/polkit-1/rules.d
Jan 26 09:55:30 compute-0 polkitd[43452]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 26 09:55:30 compute-0 polkitd[43452]: Finished loading, compiling and executing 3 rules
Jan 26 09:55:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:55:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 26 09:55:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:31.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 26 09:55:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:31 compute-0 groupadd[193075]: group added to /etc/group: name=ceph, GID=167
Jan 26 09:55:31 compute-0 groupadd[193075]: group added to /etc/gshadow: name=ceph
Jan 26 09:55:31 compute-0 groupadd[193075]: new group: name=ceph, GID=167
Jan 26 09:55:31 compute-0 useradd[193081]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 26 09:55:32 compute-0 ceph-mon[74456]: pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:55:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:55:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:55:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:33.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:33.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:55:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:55:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:34 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:55:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:34 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:55:34 compute-0 ceph-mon[74456]: pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:55:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:55:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 09:55:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:35.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:35.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:36 compute-0 sshd[1008]: Received signal 15; terminating.
Jan 26 09:55:36 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 26 09:55:36 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 26 09:55:36 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 26 09:55:36 compute-0 systemd[1]: sshd.service: Consumed 4.595s CPU time, read 32.0K from disk, written 92.0K to disk.
Jan 26 09:55:36 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 26 09:55:36 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 26 09:55:36 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 09:55:36 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 09:55:36 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 09:55:36 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 26 09:55:36 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 26 09:55:36 compute-0 sshd[193780]: Server listening on 0.0.0.0 port 22.
Jan 26 09:55:36 compute-0 sshd[193780]: Server listening on :: port 22.
Jan 26 09:55:36 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 26 09:55:36 compute-0 ceph-mon[74456]: pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 09:55:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:55:36] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:55:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:55:36] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:55:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 09:55:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:55:37.025Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:55:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:55:37.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:55:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:37.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 26 09:55:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:37.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 26 09:55:37 compute-0 podman[193907]: 2026-01-26 09:55:37.790137026 +0000 UTC m=+0.109087376 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 09:55:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:55:38 compute-0 sudo[193969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:55:38 compute-0 sudo[193969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:55:38 compute-0 sudo[193969]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:38 compute-0 ceph-mon[74456]: pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 09:55:38 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 09:55:38 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 09:55:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 09:55:38 compute-0 systemd[1]: Reloading.
Jan 26 09:55:39 compute-0 systemd-rc-local-generator[194085]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:55:39 compute-0 systemd-sysv-generator[194094]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:55:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:39.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:39 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 09:55:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:39.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:39 compute-0 auditd[705]: Audit daemon rotating log files
Jan 26 09:55:39 compute-0 ceph-mon[74456]: pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:55:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:40 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9884000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:55:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:41 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9888001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:41 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9860000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000020s ======
Jan 26 09:55:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:41.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Jan 26 09:55:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:41.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:41 compute-0 sudo[173651]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:42 compute-0 ceph-mon[74456]: pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:55:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095542 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:55:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:42 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9858000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:55:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:55:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:43 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f986c000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:43 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98880025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:43.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:43.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:44 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98600016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:55:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:45 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98580016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:45 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f986c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:45.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:45.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:46 compute-0 ceph-mon[74456]: pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:55:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:46 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98880025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:55:46] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:55:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:55:46] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:55:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 26 09:55:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:55:47.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:55:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:47 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98600016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:47 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98580016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:47.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:55:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:47.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:55:47 compute-0 ceph-mon[74456]: pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:55:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:55:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:48 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f986c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:48 compute-0 ceph-mon[74456]: pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 26 09:55:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:55:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:55:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:55:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:55:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:55:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:55:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:55:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:55:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 26 09:55:49 compute-0 sudo[202635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akkmqrmqwlzykgbsmkjjmbsqolkaenvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421348.411451-963-211736242427678/AnsiballZ_systemd.py'
Jan 26 09:55:49 compute-0 sudo[202635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:55:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:49 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98880025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:49 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98600016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:55:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:49.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:55:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:55:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:49.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:55:49 compute-0 python3.9[202653]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 09:55:49 compute-0 systemd[1]: Reloading.
Jan 26 09:55:49 compute-0 systemd-rc-local-generator[202681]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:55:49 compute-0 systemd-sysv-generator[202686]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:55:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:55:49 compute-0 ceph-mon[74456]: pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 26 09:55:49 compute-0 sudo[202635]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:50 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 09:55:50 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 09:55:50 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.580s CPU time.
Jan 26 09:55:50 compute-0 systemd[1]: run-r96bcc3876da14715a72fc9d8dd80c8e4.service: Deactivated successfully.
Jan 26 09:55:50 compute-0 sudo[202842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpstzkictwxeozqzafdewmxmqvaxstwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421349.9433382-963-78509993399545/AnsiballZ_systemd.py'
Jan 26 09:55:50 compute-0 sudo[202842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:55:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:50 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98580016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:50 compute-0 python3.9[202844]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 09:55:50 compute-0 systemd[1]: Reloading.
Jan 26 09:55:50 compute-0 systemd-sysv-generator[202877]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:55:50 compute-0 systemd-rc-local-generator[202874]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:55:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Jan 26 09:55:51 compute-0 sudo[202842]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:51 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f986c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:51 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f98880025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:51.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:51.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:51 compute-0 sudo[203034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtybmfqvalhuhksfumgbnyrpgeogdbuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421351.2203228-963-16555380335688/AnsiballZ_systemd.py'
Jan 26 09:55:51 compute-0 sudo[203034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:55:51 compute-0 python3.9[203036]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 09:55:51 compute-0 ceph-mon[74456]: pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Jan 26 09:55:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:52 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9860002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:55:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:55:52 compute-0 systemd[1]: Reloading.
Jan 26 09:55:53 compute-0 systemd-sysv-generator[203069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:55:53 compute-0 systemd-rc-local-generator[203063]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:55:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:53 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9858002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:53 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f986c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:53.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:55:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:53.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:55:53 compute-0 sudo[203034]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:53 compute-0 sudo[203225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lblifntbjzdkemnywtelvfysenfjaakr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421353.5280378-963-193057979411536/AnsiballZ_systemd.py'
Jan 26 09:55:53 compute-0 sudo[203225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:55:54 compute-0 ceph-mon[74456]: pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:55:54 compute-0 python3.9[203227]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 09:55:54 compute-0 systemd[1]: Reloading.
Jan 26 09:55:54 compute-0 systemd-rc-local-generator[203256]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:55:54 compute-0 systemd-sysv-generator[203259]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:55:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:54 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f986c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:54 compute-0 sudo[203225]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:55:54.677 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 09:55:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:55:54.677 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 09:55:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:55:54.678 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 09:55:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:55:55 compute-0 sudo[203416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfygomesikmewhlyrweieusuezueeerh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421354.7554128-1050-33364465586392/AnsiballZ_systemd.py'
Jan 26 09:55:55 compute-0 sudo[203416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:55:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:55 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9860002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:55 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9858002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:55:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:55.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:55:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:55:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:55.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:55:55 compute-0 python3.9[203418]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:55:55 compute-0 systemd[1]: Reloading.
Jan 26 09:55:55 compute-0 systemd-sysv-generator[203453]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:55:55 compute-0 systemd-rc-local-generator[203449]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:55:55 compute-0 sudo[203416]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:56 compute-0 ceph-mon[74456]: pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:55:56 compute-0 sudo[203617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oijwpdejqmithgyatkhwqbuescxbdlhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421355.8757992-1050-13016399632468/AnsiballZ_systemd.py'
Jan 26 09:55:56 compute-0 sudo[203617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:55:56 compute-0 podman[203580]: 2026-01-26 09:55:56.24384666 +0000 UTC m=+0.057565525 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 26 09:55:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:56 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f986c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:56 compute-0 python3.9[203623]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:55:56 compute-0 systemd[1]: Reloading.
Jan 26 09:55:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:55:56] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:55:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:55:56] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:55:56 compute-0 systemd-sysv-generator[203663]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:55:56 compute-0 systemd-rc-local-generator[203659]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:55:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:56 compute-0 sudo[203617]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:55:57.028Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:55:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:55:57.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:55:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:57 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f986c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:57 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9860002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:55:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:57.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:55:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:55:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:57.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:55:57 compute-0 sudo[203817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phcpeyzligxecemxgofsexfkblhtwtaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421357.0504005-1050-266800048862021/AnsiballZ_systemd.py'
Jan 26 09:55:57 compute-0 sudo[203817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:55:57 compute-0 python3.9[203819]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:55:57 compute-0 systemd[1]: Reloading.
Jan 26 09:55:57 compute-0 systemd-rc-local-generator[203845]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:55:57 compute-0 systemd-sysv-generator[203848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:55:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:55:58 compute-0 sudo[203817]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:58 compute-0 sudo[203865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:55:58 compute-0 sudo[203865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:55:58 compute-0 ceph-mon[74456]: pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:58 compute-0 sudo[203865]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:58 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9860002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:58 compute-0 sudo[204034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbkvelegyvoodfnwvpdohpkgzoclpcqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421358.1893804-1050-2114573022604/AnsiballZ_systemd.py'
Jan 26 09:55:58 compute-0 sudo[204034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:55:58 compute-0 python3.9[204036]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:55:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:55:58 compute-0 sudo[204034]: pam_unix(sudo:session): session closed for user root
Jan 26 09:55:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:59 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f986c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:55:59 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f986c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:55:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000999982s ======
Jan 26 09:55:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:55:59.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999982s
Jan 26 09:55:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:55:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000999982s ======
Jan 26 09:55:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:55:59.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999982s
Jan 26 09:55:59 compute-0 sudo[204189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqhndsmcykrtucwenpjpvqgtsqwvicwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421359.1059477-1050-232384720984822/AnsiballZ_systemd.py'
Jan 26 09:55:59 compute-0 sudo[204189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:55:59 compute-0 python3.9[204191]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:55:59 compute-0 systemd[1]: Reloading.
Jan 26 09:56:00 compute-0 systemd-sysv-generator[204225]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:56:00 compute-0 systemd-rc-local-generator[204220]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:56:00 compute-0 ceph-mon[74456]: pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:56:00 compute-0 sudo[204189]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:56:00 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9860002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:56:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:56:01 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9858003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:56:01 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f986c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000999982s ======
Jan 26 09:56:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:01.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999982s
Jan 26 09:56:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000999982s ======
Jan 26 09:56:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:01.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999982s
Jan 26 09:56:02 compute-0 sudo[204381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qygqpcsvacjytwjzxmyuohdhmtoqggps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421361.809633-1158-153991261600316/AnsiballZ_systemd.py'
Jan 26 09:56:02 compute-0 sudo[204381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:02 compute-0 ceph-mon[74456]: pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:56:02 compute-0 python3.9[204383]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 09:56:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:56:02 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f986c002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:02 compute-0 systemd[1]: Reloading.
Jan 26 09:56:02 compute-0 systemd-rc-local-generator[204415]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:56:02 compute-0 systemd-sysv-generator[204419]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:56:02 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 26 09:56:02 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 26 09:56:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:56:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:56:02 compute-0 sudo[204381]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:03 compute-0 sshd-session[204424]: Connection closed by 117.50.196.2 port 41348
Jan 26 09:56:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:56:03 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9884000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:56:03 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9858004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000999981s ======
Jan 26 09:56:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:03.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999981s
Jan 26 09:56:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:03.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:03 compute-0 sudo[204577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tziszzcxumwmbfxwcdgviheifuolptyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421363.2983391-1182-60483767257175/AnsiballZ_systemd.py'
Jan 26 09:56:03 compute-0 sudo[204577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:56:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:56:03 compute-0 python3.9[204579]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:04 compute-0 sudo[204577]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:04 compute-0 ceph-mon[74456]: pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:56:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:56:04 compute-0 sudo[204734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnzisvfqwjeymuxtnvqfrasrskmtjzrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421364.180256-1182-234957572865206/AnsiballZ_systemd.py'
Jan 26 09:56:04 compute-0 kernel: ganesha.nfsd[195393]: segfault at 50 ip 00007f990fc3a32e sp 00007f9878ff8210 error 4 in libntirpc.so.5.8[7f990fc1f000+2c000] likely on CPU 6 (core 0, socket 6)
Jan 26 09:56:04 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 09:56:04 compute-0 sudo[204734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[192838]: 26/01/2026 09:56:04 : epoch 69773a0f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9858004140 fd 38 proxy ignored for local
Jan 26 09:56:04 compute-0 systemd[1]: Started Process Core Dump (PID 204737/UID 0).
Jan 26 09:56:04 compute-0 python3.9[204736]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:56:04 compute-0 sudo[204734]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000999982s ======
Jan 26 09:56:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:05.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999982s
Jan 26 09:56:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000999982s ======
Jan 26 09:56:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:05.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999982s
Jan 26 09:56:05 compute-0 sudo[204891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prjbihmriobbzafglfubyvonqlfsdarx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421365.0844316-1182-129134424674331/AnsiballZ_systemd.py'
Jan 26 09:56:05 compute-0 sudo[204891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:05 compute-0 systemd-coredump[204738]: Process 192847 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 53:
                                                    #0  0x00007f990fc3a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 09:56:05 compute-0 systemd[1]: systemd-coredump@7-204737-0.service: Deactivated successfully.
Jan 26 09:56:05 compute-0 systemd[1]: systemd-coredump@7-204737-0.service: Consumed 1.032s CPU time.
Jan 26 09:56:05 compute-0 podman[204900]: 2026-01-26 09:56:05.694863513 +0000 UTC m=+0.051467268 container died 0041237f0bd72d492d807afb602e86f822e62edd08cccd96fef4d8275536f1db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:56:05 compute-0 python3.9[204893]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3318b966f25794610fc7f2c252c92b07c8f469a45108d8c5b900bf1cd72323a-merged.mount: Deactivated successfully.
Jan 26 09:56:05 compute-0 podman[204900]: 2026-01-26 09:56:05.739090966 +0000 UTC m=+0.095694691 container remove 0041237f0bd72d492d807afb602e86f822e62edd08cccd96fef4d8275536f1db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 26 09:56:05 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 09:56:05 compute-0 sshd-session[204894]: Invalid user oracle from 157.245.76.178 port 53814
Jan 26 09:56:05 compute-0 sudo[204891]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:05 compute-0 sshd-session[204894]: Connection closed by invalid user oracle 157.245.76.178 port 53814 [preauth]
Jan 26 09:56:05 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 09:56:05 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.622s CPU time.
Jan 26 09:56:06 compute-0 sudo[205096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ratogyscogwnvrfjcbsgkfqpvqhxphrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421365.9983976-1182-32371947940858/AnsiballZ_systemd.py'
Jan 26 09:56:06 compute-0 sudo[205096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:06 compute-0 ceph-mon[74456]: pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:56:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:56:06] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:56:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:56:06] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:56:06 compute-0 python3.9[205098]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:06 compute-0 sudo[205096]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:56:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:56:07.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:56:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000999982s ======
Jan 26 09:56:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:07.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999982s
Jan 26 09:56:07 compute-0 sudo[205252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzwnkbbllvhrvjnzkmuajgiwfmbrixgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421366.9191747-1182-270274431137743/AnsiballZ_systemd.py'
Jan 26 09:56:07 compute-0 sudo[205252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:07.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:07 compute-0 python3.9[205254]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:07 compute-0 sudo[205252]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:56:08 compute-0 sudo[205431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btjglqrslokoqodzowjfewwmwhiidani ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421367.8491905-1182-132381279894208/AnsiballZ_systemd.py'
Jan 26 09:56:08 compute-0 sudo[205431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:08 compute-0 podman[205370]: 2026-01-26 09:56:08.199001043 +0000 UTC m=+0.135310582 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 26 09:56:08 compute-0 ceph-mon[74456]: pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:56:08 compute-0 python3.9[205436]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:08 compute-0 sudo[205431]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:56:09 compute-0 sudo[205591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhkvzvrilwrwwndwdfriopinwicrysyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421368.7487695-1182-89378706627816/AnsiballZ_systemd.py'
Jan 26 09:56:09 compute-0 sudo[205591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000999982s ======
Jan 26 09:56:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:09.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999982s
Jan 26 09:56:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:09.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:09 compute-0 python3.9[205593]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:09 compute-0 sudo[205591]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:10 compute-0 sudo[205746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noijtzuhxjpmjhikkfdofajlonipxxkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421369.650626-1182-173368556791379/AnsiballZ_systemd.py'
Jan 26 09:56:10 compute-0 sudo[205746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:10 compute-0 python3.9[205748]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:10 compute-0 sudo[205746]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095610 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:56:10 compute-0 ceph-mon[74456]: pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:56:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:56:10 compute-0 sudo[205903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skmjfunpgxshycmialclhwzyewjqqvpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421370.6157963-1182-202187471166182/AnsiballZ_systemd.py'
Jan 26 09:56:10 compute-0 sudo[205903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000999982s ======
Jan 26 09:56:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:11.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999982s
Jan 26 09:56:11 compute-0 python3.9[205905]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:11 compute-0 sudo[205906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:56:11 compute-0 sudo[205906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:56:11 compute-0 sudo[205906]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:11.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:11 compute-0 sudo[205903]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:11 compute-0 sudo[205934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:56:11 compute-0 sudo[205934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:56:11 compute-0 sudo[206125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diixcgsxvbbntqlagkqbypnlhhulafmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421371.5247498-1182-32528469332773/AnsiballZ_systemd.py'
Jan 26 09:56:11 compute-0 sudo[206125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:11 compute-0 sudo[205934]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:12 compute-0 python3.9[206127]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:56:12 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:56:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:56:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:56:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:56:12 compute-0 sudo[206125]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:12 compute-0 ceph-mon[74456]: pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:56:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:56:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:56:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:56:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:56:12 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:56:12 compute-0 sudo[206296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbqdikqsalnsguuuygpdqdzunbriijpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421372.3809984-1182-150249423816662/AnsiballZ_systemd.py'
Jan 26 09:56:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:56:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:56:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:56:12 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:56:12 compute-0 sudo[206296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:56:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:56:12 compute-0 sudo[206299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:56:12 compute-0 sudo[206299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:56:12 compute-0 sudo[206299]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:12 compute-0 sudo[206324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:56:12 compute-0 sudo[206324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:56:13 compute-0 python3.9[206298]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:13.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:13 compute-0 sudo[206296]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:13.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:13 compute-0 podman[206446]: 2026-01-26 09:56:13.522441623 +0000 UTC m=+0.073553915 container create 8cddd79cb82e17ce5af1050585eb52e6f918203927e7bdfd30c83eb803fb621b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:56:13 compute-0 podman[206446]: 2026-01-26 09:56:13.489068222 +0000 UTC m=+0.040180604 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:56:13 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:56:13 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:56:13 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:56:13 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:56:13 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:56:13 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:56:13 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:56:13 compute-0 ceph-mon[74456]: pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:56:13 compute-0 systemd[1]: Started libpod-conmon-8cddd79cb82e17ce5af1050585eb52e6f918203927e7bdfd30c83eb803fb621b.scope.
Jan 26 09:56:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:56:13 compute-0 podman[206446]: 2026-01-26 09:56:13.740727853 +0000 UTC m=+0.291840175 container init 8cddd79cb82e17ce5af1050585eb52e6f918203927e7bdfd30c83eb803fb621b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 26 09:56:13 compute-0 podman[206446]: 2026-01-26 09:56:13.750326127 +0000 UTC m=+0.301438419 container start 8cddd79cb82e17ce5af1050585eb52e6f918203927e7bdfd30c83eb803fb621b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 09:56:13 compute-0 jolly_davinci[206524]: 167 167
Jan 26 09:56:13 compute-0 podman[206446]: 2026-01-26 09:56:13.757657579 +0000 UTC m=+0.308769911 container attach 8cddd79cb82e17ce5af1050585eb52e6f918203927e7bdfd30c83eb803fb621b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_davinci, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:56:13 compute-0 systemd[1]: libpod-8cddd79cb82e17ce5af1050585eb52e6f918203927e7bdfd30c83eb803fb621b.scope: Deactivated successfully.
Jan 26 09:56:13 compute-0 podman[206446]: 2026-01-26 09:56:13.758355577 +0000 UTC m=+0.309467869 container died 8cddd79cb82e17ce5af1050585eb52e6f918203927e7bdfd30c83eb803fb621b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_davinci, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-85abfea13c3d55b920dc85547d0359b2e47a6e14a7173f537acd49a1e55f275f-merged.mount: Deactivated successfully.
Jan 26 09:56:13 compute-0 sudo[206564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdkqonfnpjocqkagckdniyyxovmkkibw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421373.4037642-1182-181713302944535/AnsiballZ_systemd.py'
Jan 26 09:56:13 compute-0 sudo[206564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:13 compute-0 podman[206446]: 2026-01-26 09:56:13.855326924 +0000 UTC m=+0.406439236 container remove 8cddd79cb82e17ce5af1050585eb52e6f918203927e7bdfd30c83eb803fb621b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_davinci, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 09:56:13 compute-0 systemd[1]: libpod-conmon-8cddd79cb82e17ce5af1050585eb52e6f918203927e7bdfd30c83eb803fb621b.scope: Deactivated successfully.
Jan 26 09:56:14 compute-0 podman[206586]: 2026-01-26 09:56:14.086403543 +0000 UTC m=+0.061138870 container create 2ba5dd5b719f72f5e2d931ede43536d90de3de9f2e9265f5f5a0d9ca69a1ed3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 26 09:56:14 compute-0 python3.9[206576]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:14 compute-0 systemd[1]: Started libpod-conmon-2ba5dd5b719f72f5e2d931ede43536d90de3de9f2e9265f5f5a0d9ca69a1ed3d.scope.
Jan 26 09:56:14 compute-0 podman[206586]: 2026-01-26 09:56:14.056487032 +0000 UTC m=+0.031222439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:56:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a26f1beba97102245b7fdd23aa1fdd1be90ea6b4a9f6799df7e6c57a214e71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a26f1beba97102245b7fdd23aa1fdd1be90ea6b4a9f6799df7e6c57a214e71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a26f1beba97102245b7fdd23aa1fdd1be90ea6b4a9f6799df7e6c57a214e71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a26f1beba97102245b7fdd23aa1fdd1be90ea6b4a9f6799df7e6c57a214e71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59a26f1beba97102245b7fdd23aa1fdd1be90ea6b4a9f6799df7e6c57a214e71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
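[editor's note] The 0x7fffffff in the xfs remount warnings above is the 32-bit time_t ceiling: without the XFS bigtime feature, inode timestamps saturate in January 2038, which is all the kernel is pointing out here. A one-line Python check (illustrative only, not part of the job output) confirms the date:

    import datetime
    # 0x7fffffff seconds after the Unix epoch, i.e. the classic Y2038 limit.
    print(datetime.datetime.fromtimestamp(0x7fffffff, tz=datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00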
Jan 26 09:56:14 compute-0 podman[206586]: 2026-01-26 09:56:14.181449063 +0000 UTC m=+0.156184390 container init 2ba5dd5b719f72f5e2d931ede43536d90de3de9f2e9265f5f5a0d9ca69a1ed3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:56:14 compute-0 podman[206586]: 2026-01-26 09:56:14.190441647 +0000 UTC m=+0.165176964 container start 2ba5dd5b719f72f5e2d931ede43536d90de3de9f2e9265f5f5a0d9ca69a1ed3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_pare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 26 09:56:14 compute-0 podman[206586]: 2026-01-26 09:56:14.193909387 +0000 UTC m=+0.168644704 container attach 2ba5dd5b719f72f5e2d931ede43536d90de3de9f2e9265f5f5a0d9ca69a1ed3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_pare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 26 09:56:14 compute-0 sudo[206564]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:14 compute-0 musing_pare[206603]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:56:14 compute-0 musing_pare[206603]: --> All data devices are unavailable
Jan 26 09:56:14 compute-0 systemd[1]: libpod-2ba5dd5b719f72f5e2d931ede43536d90de3de9f2e9265f5f5a0d9ca69a1ed3d.scope: Deactivated successfully.
Jan 26 09:56:14 compute-0 podman[206586]: 2026-01-26 09:56:14.578664097 +0000 UTC m=+0.553399424 container died 2ba5dd5b719f72f5e2d931ede43536d90de3de9f2e9265f5f5a0d9ca69a1ed3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_pare, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 26 09:56:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-59a26f1beba97102245b7fdd23aa1fdd1be90ea6b4a9f6799df7e6c57a214e71-merged.mount: Deactivated successfully.
Jan 26 09:56:14 compute-0 podman[206586]: 2026-01-26 09:56:14.757392725 +0000 UTC m=+0.732128042 container remove 2ba5dd5b719f72f5e2d931ede43536d90de3de9f2e9265f5f5a0d9ca69a1ed3d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 09:56:14 compute-0 systemd[1]: libpod-conmon-2ba5dd5b719f72f5e2d931ede43536d90de3de9f2e9265f5f5a0d9ca69a1ed3d.scope: Deactivated successfully.
Jan 26 09:56:14 compute-0 sudo[206785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouwuxuochlvraoblrlpyctvqxesvhzoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421374.3874247-1182-99016943381444/AnsiballZ_systemd.py'
Jan 26 09:56:14 compute-0 sudo[206785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:14 compute-0 sudo[206324]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:14 compute-0 sudo[206788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:56:14 compute-0 sudo[206788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:56:14 compute-0 sudo[206788]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:56:14 compute-0 sudo[206813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:56:14 compute-0 sudo[206813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:56:15 compute-0 python3.9[206787]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:15.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:15 compute-0 sudo[206785]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:15 compute-0 podman[206885]: 2026-01-26 09:56:15.285210372 +0000 UTC m=+0.037253204 container create b018cebb19a591c5e97576b73ad5b5d4e3cbe824c1bb6a1a07ba7590aa9c7cae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:56:15 compute-0 systemd[1]: Started libpod-conmon-b018cebb19a591c5e97576b73ad5b5d4e3cbe824c1bb6a1a07ba7590aa9c7cae.scope.
Jan 26 09:56:15 compute-0 podman[206885]: 2026-01-26 09:56:15.270824622 +0000 UTC m=+0.022867474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:56:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000999981s ======
Jan 26 09:56:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:15.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999981s
Jan 26 09:56:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:56:15 compute-0 podman[206885]: 2026-01-26 09:56:15.382290667 +0000 UTC m=+0.134333519 container init b018cebb19a591c5e97576b73ad5b5d4e3cbe824c1bb6a1a07ba7590aa9c7cae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:56:15 compute-0 podman[206885]: 2026-01-26 09:56:15.389111258 +0000 UTC m=+0.141154090 container start b018cebb19a591c5e97576b73ad5b5d4e3cbe824c1bb6a1a07ba7590aa9c7cae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 26 09:56:15 compute-0 podman[206885]: 2026-01-26 09:56:15.392695307 +0000 UTC m=+0.144738159 container attach b018cebb19a591c5e97576b73ad5b5d4e3cbe824c1bb6a1a07ba7590aa9c7cae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_mirzakhani, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 26 09:56:15 compute-0 infallible_mirzakhani[206931]: 167 167
Jan 26 09:56:15 compute-0 systemd[1]: libpod-b018cebb19a591c5e97576b73ad5b5d4e3cbe824c1bb6a1a07ba7590aa9c7cae.scope: Deactivated successfully.
Jan 26 09:56:15 compute-0 podman[206885]: 2026-01-26 09:56:15.395769473 +0000 UTC m=+0.147812305 container died b018cebb19a591c5e97576b73ad5b5d4e3cbe824c1bb6a1a07ba7590aa9c7cae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 09:56:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-57cbccaf0326bb4e8643f812ea78ef40425c072356deb1d1b7b4472c8a62da28-merged.mount: Deactivated successfully.
Jan 26 09:56:15 compute-0 podman[206885]: 2026-01-26 09:56:15.439015673 +0000 UTC m=+0.191058495 container remove b018cebb19a591c5e97576b73ad5b5d4e3cbe824c1bb6a1a07ba7590aa9c7cae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 09:56:15 compute-0 systemd[1]: libpod-conmon-b018cebb19a591c5e97576b73ad5b5d4e3cbe824c1bb6a1a07ba7590aa9c7cae.scope: Deactivated successfully.
Jan 26 09:56:15 compute-0 podman[207021]: 2026-01-26 09:56:15.617981466 +0000 UTC m=+0.047614874 container create 6547578a82155c5686ba707fb2205f145fa8d2f42bab68e612516413e48158b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_cartwright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 09:56:15 compute-0 systemd[1]: Started libpod-conmon-6547578a82155c5686ba707fb2205f145fa8d2f42bab68e612516413e48158b0.scope.
Jan 26 09:56:15 compute-0 podman[207021]: 2026-01-26 09:56:15.594975265 +0000 UTC m=+0.024608693 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:56:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:56:15 compute-0 sudo[207091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msicaggxwrryeywxufxenmatfyhyzkie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421375.3588147-1182-138884917952027/AnsiballZ_systemd.py'
Jan 26 09:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446cf4fd3dee0aea8014f0eacd2cd1384a08fec28e29638f6d243aa436fe8567/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446cf4fd3dee0aea8014f0eacd2cd1384a08fec28e29638f6d243aa436fe8567/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446cf4fd3dee0aea8014f0eacd2cd1384a08fec28e29638f6d243aa436fe8567/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/446cf4fd3dee0aea8014f0eacd2cd1384a08fec28e29638f6d243aa436fe8567/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:15 compute-0 sudo[207091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:15 compute-0 podman[207021]: 2026-01-26 09:56:15.717079055 +0000 UTC m=+0.146712483 container init 6547578a82155c5686ba707fb2205f145fa8d2f42bab68e612516413e48158b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_cartwright, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 09:56:15 compute-0 podman[207021]: 2026-01-26 09:56:15.724821841 +0000 UTC m=+0.154455249 container start 6547578a82155c5686ba707fb2205f145fa8d2f42bab68e612516413e48158b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 09:56:15 compute-0 podman[207021]: 2026-01-26 09:56:15.728737743 +0000 UTC m=+0.158371151 container attach 6547578a82155c5686ba707fb2205f145fa8d2f42bab68e612516413e48158b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:56:15 compute-0 ceph-mon[74456]: pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]: {
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:     "0": [
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:         {
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:             "devices": [
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "/dev/loop3"
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:             ],
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:             "lv_name": "ceph_lv0",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:             "lv_size": "21470642176",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:             "name": "ceph_lv0",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:             "tags": {
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "ceph.cluster_name": "ceph",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "ceph.crush_device_class": "",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "ceph.encrypted": "0",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "ceph.osd_id": "0",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "ceph.type": "block",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "ceph.vdo": "0",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:                 "ceph.with_tpm": "0"
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:             },
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:             "type": "block",
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:             "vg_name": "ceph_vg0"
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:         }
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]:     ]
Jan 26 09:56:15 compute-0 youthful_cartwright[207078]: }
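[editor's note] The JSON block printed by the youthful_cartwright container above is the result of the `ceph-volume ... lvm list --format json` call issued via sudo at 09:56:14: a single OSD (id 0) backed by LV ceph_vg0/ceph_lv0 on /dev/loop3. A minimal Python sketch of consuming such a listing; the abbreviated `raw` literal copies only the fields actually used from the log, and the helper name is hypothetical:

    import json

    # Abbreviated copy of the listing above; real ceph-volume output carries more fields.
    raw = '''
    {
      "0": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {"ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49"}
        }
      ]
    }
    '''

    def osd_devices(listing: str) -> dict:
        """Map OSD id -> (devices, lv_path, osd_fsid) from a ceph-volume lvm listing."""
        out = {}
        for osd_id, lvs in json.loads(listing).items():
            for lv in lvs:
                out[int(osd_id)] = (lv["devices"], lv["lv_path"],
                                    lv["tags"]["ceph.osd_fsid"])
        return out

    print(osd_devices(raw))
    # -> {0: (['/dev/loop3'], '/dev/ceph_vg0/ceph_lv0', 'ac85653c-ceaa-4fd5-80ce-94914596ed49')}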
Jan 26 09:56:16 compute-0 python3.9[207093]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 09:56:16 compute-0 podman[207021]: 2026-01-26 09:56:16.024672556 +0000 UTC m=+0.454306024 container died 6547578a82155c5686ba707fb2205f145fa8d2f42bab68e612516413e48158b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_cartwright, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:56:16 compute-0 systemd[1]: libpod-6547578a82155c5686ba707fb2205f145fa8d2f42bab68e612516413e48158b0.scope: Deactivated successfully.
Jan 26 09:56:16 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 8.
Jan 26 09:56:16 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:56:16 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.622s CPU time.
Jan 26 09:56:16 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-446cf4fd3dee0aea8014f0eacd2cd1384a08fec28e29638f6d243aa436fe8567-merged.mount: Deactivated successfully.
Jan 26 09:56:16 compute-0 podman[207021]: 2026-01-26 09:56:16.089025239 +0000 UTC m=+0.518658647 container remove 6547578a82155c5686ba707fb2205f145fa8d2f42bab68e612516413e48158b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:56:16 compute-0 systemd[1]: libpod-conmon-6547578a82155c5686ba707fb2205f145fa8d2f42bab68e612516413e48158b0.scope: Deactivated successfully.
Jan 26 09:56:16 compute-0 sudo[207091]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:16 compute-0 sudo[206813]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:16 compute-0 sudo[207139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:56:16 compute-0 sudo[207139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:56:16 compute-0 sudo[207139]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:16 compute-0 sudo[207211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:56:16 compute-0 sudo[207211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:56:16 compute-0 podman[207214]: 2026-01-26 09:56:16.300161844 +0000 UTC m=+0.051719404 container create 8a634fecc04d02b0778a3a5dad1920fd2e5933af6ab92dcb153bce4771eb91b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 26 09:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce29b027ad7ff34e7034ab5f980b2c4f88e5e50af360a76e688905f91bc9f11/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce29b027ad7ff34e7034ab5f980b2c4f88e5e50af360a76e688905f91bc9f11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce29b027ad7ff34e7034ab5f980b2c4f88e5e50af360a76e688905f91bc9f11/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:16 compute-0 podman[207214]: 2026-01-26 09:56:16.278495099 +0000 UTC m=+0.030052659 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce29b027ad7ff34e7034ab5f980b2c4f88e5e50af360a76e688905f91bc9f11/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:16 compute-0 podman[207214]: 2026-01-26 09:56:16.401620622 +0000 UTC m=+0.153178212 container init 8a634fecc04d02b0778a3a5dad1920fd2e5933af6ab92dcb153bce4771eb91b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:56:16 compute-0 podman[207214]: 2026-01-26 09:56:16.407335413 +0000 UTC m=+0.158892973 container start 8a634fecc04d02b0778a3a5dad1920fd2e5933af6ab92dcb153bce4771eb91b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 26 09:56:16 compute-0 bash[207214]: 8a634fecc04d02b0778a3a5dad1920fd2e5933af6ab92dcb153bce4771eb91b4
Jan 26 09:56:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:16 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 09:56:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:16 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 09:56:16 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:56:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:16 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 09:56:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:16 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 09:56:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:16 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 09:56:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:16 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 09:56:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:16 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 09:56:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:16 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:56:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:56:16] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:56:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:56:16] "GET /metrics HTTP/1.1" 200 48263 "" "Prometheus/2.51.0"
Jan 26 09:56:16 compute-0 podman[207335]: 2026-01-26 09:56:16.687532439 +0000 UTC m=+0.038113310 container create 969ad80dcc2f571f4b94dbf23f0683292160360adc3ac5356f1513a44bb5e8f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:56:16 compute-0 systemd[1]: Started libpod-conmon-969ad80dcc2f571f4b94dbf23f0683292160360adc3ac5356f1513a44bb5e8f6.scope.
Jan 26 09:56:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:56:16 compute-0 podman[207335]: 2026-01-26 09:56:16.763982522 +0000 UTC m=+0.114563423 container init 969ad80dcc2f571f4b94dbf23f0683292160360adc3ac5356f1513a44bb5e8f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:56:16 compute-0 podman[207335]: 2026-01-26 09:56:16.669952774 +0000 UTC m=+0.020533675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:56:16 compute-0 podman[207335]: 2026-01-26 09:56:16.772594462 +0000 UTC m=+0.123175333 container start 969ad80dcc2f571f4b94dbf23f0683292160360adc3ac5356f1513a44bb5e8f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:56:16 compute-0 podman[207335]: 2026-01-26 09:56:16.776480695 +0000 UTC m=+0.127061586 container attach 969ad80dcc2f571f4b94dbf23f0683292160360adc3ac5356f1513a44bb5e8f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:56:16 compute-0 trusting_pasteur[207352]: 167 167
Jan 26 09:56:16 compute-0 systemd[1]: libpod-969ad80dcc2f571f4b94dbf23f0683292160360adc3ac5356f1513a44bb5e8f6.scope: Deactivated successfully.
Jan 26 09:56:16 compute-0 podman[207335]: 2026-01-26 09:56:16.777673654 +0000 UTC m=+0.128254525 container died 969ad80dcc2f571f4b94dbf23f0683292160360adc3ac5356f1513a44bb5e8f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-338aa7574bf9645058a7e573d3555b678eb3ea177c0403430b7e9705dce412ba-merged.mount: Deactivated successfully.
Jan 26 09:56:16 compute-0 podman[207335]: 2026-01-26 09:56:16.815180783 +0000 UTC m=+0.165761654 container remove 969ad80dcc2f571f4b94dbf23f0683292160360adc3ac5356f1513a44bb5e8f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_pasteur, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Jan 26 09:56:16 compute-0 systemd[1]: libpod-conmon-969ad80dcc2f571f4b94dbf23f0683292160360adc3ac5356f1513a44bb5e8f6.scope: Deactivated successfully.
Jan 26 09:56:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:56:16 compute-0 podman[207376]: 2026-01-26 09:56:16.996346308 +0000 UTC m=+0.046039492 container create ad72e7e033ecd38b2c9353b3cc78a17122db33b5cae39031e4b84b6262189583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 09:56:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:56:17.031Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:56:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:56:17.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:56:17 compute-0 systemd[1]: Started libpod-conmon-ad72e7e033ecd38b2c9353b3cc78a17122db33b5cae39031e4b84b6262189583.scope.
Jan 26 09:56:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fedbab082cc15bd08cc25079b3745b25931884f92fe30219bf03797ac6f94d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fedbab082cc15bd08cc25079b3745b25931884f92fe30219bf03797ac6f94d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fedbab082cc15bd08cc25079b3745b25931884f92fe30219bf03797ac6f94d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fedbab082cc15bd08cc25079b3745b25931884f92fe30219bf03797ac6f94d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:56:17 compute-0 podman[207376]: 2026-01-26 09:56:16.977972797 +0000 UTC m=+0.027665991 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:56:17 compute-0 podman[207376]: 2026-01-26 09:56:17.083560564 +0000 UTC m=+0.133253778 container init ad72e7e033ecd38b2c9353b3cc78a17122db33b5cae39031e4b84b6262189583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Jan 26 09:56:17 compute-0 podman[207376]: 2026-01-26 09:56:17.094418096 +0000 UTC m=+0.144111280 container start ad72e7e033ecd38b2c9353b3cc78a17122db33b5cae39031e4b84b6262189583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_payne, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:56:17 compute-0 podman[207376]: 2026-01-26 09:56:17.097702899 +0000 UTC m=+0.147396093 container attach ad72e7e033ecd38b2c9353b3cc78a17122db33b5cae39031e4b84b6262189583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_payne, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:56:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:17.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:17.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:17 compute-0 lvm[207466]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:56:17 compute-0 lvm[207466]: VG ceph_vg0 finished
Jan 26 09:56:17 compute-0 mystifying_payne[207392]: {}
Jan 26 09:56:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:56:17 compute-0 systemd[1]: libpod-ad72e7e033ecd38b2c9353b3cc78a17122db33b5cae39031e4b84b6262189583.scope: Deactivated successfully.
Jan 26 09:56:17 compute-0 systemd[1]: libpod-ad72e7e033ecd38b2c9353b3cc78a17122db33b5cae39031e4b84b6262189583.scope: Consumed 1.207s CPU time.
Jan 26 09:56:17 compute-0 podman[207376]: 2026-01-26 09:56:17.872011738 +0000 UTC m=+0.921704962 container died ad72e7e033ecd38b2c9353b3cc78a17122db33b5cae39031e4b84b6262189583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_payne, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:56:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fedbab082cc15bd08cc25079b3745b25931884f92fe30219bf03797ac6f94d9-merged.mount: Deactivated successfully.
Jan 26 09:56:17 compute-0 podman[207376]: 2026-01-26 09:56:17.926786457 +0000 UTC m=+0.976479631 container remove ad72e7e033ecd38b2c9353b3cc78a17122db33b5cae39031e4b84b6262189583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_payne, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:56:17 compute-0 systemd[1]: libpod-conmon-ad72e7e033ecd38b2c9353b3cc78a17122db33b5cae39031e4b84b6262189583.scope: Deactivated successfully.
Jan 26 09:56:17 compute-0 ceph-mon[74456]: pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:56:17 compute-0 sudo[207211]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:56:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:56:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:56:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:56:18 compute-0 sudo[207481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:56:18 compute-0 sudo[207481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:56:18 compute-0 sudo[207481]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:18 compute-0 sudo[207506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:56:18 compute-0 sudo[207506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:56:18 compute-0 sudo[207506]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:56:18
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'volumes', '.rgw.root', 'default.rgw.control', '.nfs', 'cephfs.cephfs.meta', 'images', 'vms']
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:56:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:56:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:56:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:56:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:56:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:56:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:56:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000999981s ======
Jan 26 09:56:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:19.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999981s
Jan 26 09:56:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:19.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:19 compute-0 sudo[207658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htzrpwmgwfictmaxtccfqqmxabwzhewg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421379.513375-1488-66113300365335/AnsiballZ_file.py'
Jan 26 09:56:19 compute-0 sudo[207658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:19 compute-0 python3.9[207660]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:56:19 compute-0 sudo[207658]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:20 compute-0 ceph-mon[74456]: pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:56:20 compute-0 sudo[207812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgsdmgauayvdkzejprsfdglmvkpcvijf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421380.1370769-1488-154347226407739/AnsiballZ_file.py'
Jan 26 09:56:20 compute-0 sudo[207812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:20 compute-0 python3.9[207814]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:56:20 compute-0 sudo[207812]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:56:21 compute-0 sudo[207964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzshbrtorlpzmoedbmsmpzvhtizjgobt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421380.8422909-1488-222486170070817/AnsiballZ_file.py'
Jan 26 09:56:21 compute-0 sudo[207964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:21.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:21 compute-0 python3.9[207966]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:56:21 compute-0 sudo[207964]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:21.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:21 compute-0 sudo[208116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiwbllowzmkybebroouhwyibiogqoxls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421381.4704425-1488-160966954851039/AnsiballZ_file.py'
Jan 26 09:56:21 compute-0 sudo[208116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:21 compute-0 python3.9[208118]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:56:21 compute-0 sudo[208116]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:22 compute-0 ceph-mon[74456]: pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:56:22 compute-0 sudo[208269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnqjviiqxpnljsfrksoeqgxxcxyyufgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421382.1294787-1488-226290286228567/AnsiballZ_file.py'
Jan 26 09:56:22 compute-0 sudo[208269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:22 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:56:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:22 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:56:22 compute-0 python3.9[208272]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:56:22 compute-0 sudo[208269]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:56:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:56:22 compute-0 sudo[208422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zavrcoykycdrucvdjnhpayguvjphguek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421382.6969101-1488-137130228836077/AnsiballZ_file.py'
Jan 26 09:56:22 compute-0 sudo[208422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.076073) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421383076113, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4203, "num_deletes": 502, "total_data_size": 8621810, "memory_usage": 8758560, "flush_reason": "Manual Compaction"}
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 26 09:56:23 compute-0 python3.9[208424]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421383125190, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 8366266, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13183, "largest_seqno": 17385, "table_properties": {"data_size": 8348437, "index_size": 12083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36561, "raw_average_key_size": 19, "raw_value_size": 8311857, "raw_average_value_size": 4475, "num_data_blocks": 528, "num_entries": 1857, "num_filter_entries": 1857, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420938, "oldest_key_time": 1769420938, "file_creation_time": 1769421383, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 49179 microseconds, and 12835 cpu microseconds.
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.125256) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 8366266 bytes OK
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.125275) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.126502) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.126514) EVENT_LOG_v1 {"time_micros": 1769421383126511, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.126531) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8605003, prev total WAL file size 8605003, number of live WAL files 2.
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.128141) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(8170KB)], [32(12MB)]
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421383128231, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 21244737, "oldest_snapshot_seqno": -1}
Jan 26 09:56:23 compute-0 sudo[208422]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:23.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5064 keys, 15580052 bytes, temperature: kUnknown
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421383237092, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 15580052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15541209, "index_size": 25101, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 126598, "raw_average_key_size": 24, "raw_value_size": 15444508, "raw_average_value_size": 3049, "num_data_blocks": 1057, "num_entries": 5064, "num_filter_entries": 5064, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769421383, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.237371) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15580052 bytes
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.238663) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.9 rd, 143.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(8.0, 12.3 +0.0 blob) out(14.9 +0.0 blob), read-write-amplify(4.4) write-amplify(1.9) OK, records in: 6086, records dropped: 1022 output_compression: NoCompression
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.238678) EVENT_LOG_v1 {"time_micros": 1769421383238671, "job": 14, "event": "compaction_finished", "compaction_time_micros": 108985, "compaction_time_cpu_micros": 33382, "output_level": 6, "num_output_files": 1, "total_output_size": 15580052, "num_input_records": 6086, "num_output_records": 5064, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421383240242, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421383242727, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.128025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.242825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.242829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.242831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.242832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:56:23 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:23.242834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:56:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:23.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:24 compute-0 ceph-mon[74456]: pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:56:24 compute-0 python3.9[208574]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:56:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:56:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:25.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:25 compute-0 sudo[208726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwdklbcwlzzbdvrydcqcirijwkjupghe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421384.8077226-1641-111179526771104/AnsiballZ_stat.py'
Jan 26 09:56:25 compute-0 sudo[208726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:25.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:25 compute-0 python3.9[208728]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:25 compute-0 sudo[208726]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:25 compute-0 sudo[208851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxrregzkhhdbdiinfyixbakkwyawalrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421384.8077226-1641-111179526771104/AnsiballZ_copy.py'
Jan 26 09:56:25 compute-0 sudo[208851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:26 compute-0 ceph-mon[74456]: pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:56:26 compute-0 python3.9[208853]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769421384.8077226-1641-111179526771104/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:26 compute-0 sudo[208851]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:26 compute-0 sudo[209018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyavjkrjotspotjhdmmgfacuprdcgwko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421386.3407524-1641-157893632091787/AnsiballZ_stat.py'
Jan 26 09:56:26 compute-0 sudo[209018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:26 compute-0 podman[208979]: 2026-01-26 09:56:26.620747536 +0000 UTC m=+0.052797558 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 09:56:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:56:26] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Jan 26 09:56:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:56:26] "GET /metrics HTTP/1.1" 200 48274 "" "Prometheus/2.51.0"
Jan 26 09:56:26 compute-0 python3.9[209026]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:26 compute-0 sudo[209018]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:56:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:56:27.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:56:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:56:27.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:56:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:27.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:27 compute-0 sudo[209149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdvdrzgnjeqcmyomnlhjqhciyrqughzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421386.3407524-1641-157893632091787/AnsiballZ_copy.py'
Jan 26 09:56:27 compute-0 sudo[209149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:27.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:27 compute-0 python3.9[209151]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769421386.3407524-1641-157893632091787/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:27 compute-0 sudo[209149]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.866296) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421387866336, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 284, "num_deletes": 250, "total_data_size": 82118, "memory_usage": 87200, "flush_reason": "Manual Compaction"}
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421387868673, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 81538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17386, "largest_seqno": 17669, "table_properties": {"data_size": 79610, "index_size": 156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5188, "raw_average_key_size": 19, "raw_value_size": 75849, "raw_average_value_size": 281, "num_data_blocks": 7, "num_entries": 269, "num_filter_entries": 269, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769421384, "oldest_key_time": 1769421384, "file_creation_time": 1769421387, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 2573 microseconds, and 1152 cpu microseconds.
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.868867) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 81538 bytes OK
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.868948) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.870240) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.870264) EVENT_LOG_v1 {"time_micros": 1769421387870258, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.870282) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 80012, prev total WAL file size 80012, number of live WAL files 2.
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.871150) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(79KB)], [35(14MB)]
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421387871297, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 15661590, "oldest_snapshot_seqno": -1}
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4825 keys, 11601706 bytes, temperature: kUnknown
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421387950524, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 11601706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11568992, "index_size": 19548, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12101, "raw_key_size": 122066, "raw_average_key_size": 25, "raw_value_size": 11480981, "raw_average_value_size": 2379, "num_data_blocks": 814, "num_entries": 4825, "num_filter_entries": 4825, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769421387, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.951004) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 11601706 bytes
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.953616) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.0 rd, 145.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 14.9 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(334.4) write-amplify(142.3) OK, records in: 5333, records dropped: 508 output_compression: NoCompression
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.953642) EVENT_LOG_v1 {"time_micros": 1769421387953630, "job": 16, "event": "compaction_finished", "compaction_time_micros": 79517, "compaction_time_cpu_micros": 25081, "output_level": 6, "num_output_files": 1, "total_output_size": 11601706, "num_input_records": 5333, "num_output_records": 4825, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421387953835, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421387957689, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.871068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.957774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.957780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.957782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.957783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:56:27 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:56:27.957785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:56:27 compute-0 sudo[209301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emwwlhaianfgxswodyfixafamgtvefuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421387.692107-1641-161415505808073/AnsiballZ_stat.py'
Jan 26 09:56:27 compute-0 sudo[209301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:28 compute-0 ceph-mon[74456]: pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:56:28 compute-0 python3.9[209303]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:28 compute-0 sudo[209301]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 09:56:28 compute-0 sudo[209428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyoekiqzcqxgueoimsijhtedhgzkhhhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421387.692107-1641-161415505808073/AnsiballZ_copy.py'
Jan 26 09:56:28 compute-0 sudo[209428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 09:56:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:28 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:56:28 compute-0 python3.9[209436]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769421387.692107-1641-161415505808073/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:28 compute-0 sudo[209428]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:56:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:29 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:29 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:29.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:29 compute-0 sudo[209595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejxleyhqmdyijnfcponjptahaowgacxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421388.9562652-1641-118220397305766/AnsiballZ_stat.py'
Jan 26 09:56:29 compute-0 sudo[209595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:29.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:29 compute-0 python3.9[209597]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:29 compute-0 sudo[209595]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:29 compute-0 sudo[209720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aryzhjnowbmovoajpzcsdbjughtwiaae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421388.9562652-1641-118220397305766/AnsiballZ_copy.py'
Jan 26 09:56:29 compute-0 sudo[209720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:29 compute-0 python3.9[209722]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769421388.9562652-1641-118220397305766/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:29 compute-0 sudo[209720]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:30 compute-0 ceph-mon[74456]: pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:56:30 compute-0 sudo[209874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbdlmjksluxfmldzkswntipnixclrqqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421390.145579-1641-21451075809757/AnsiballZ_stat.py'
Jan 26 09:56:30 compute-0 sudo[209874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:30 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe35c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:30 compute-0 python3.9[209876]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:30 compute-0 sudo[209874]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:56:31 compute-0 sudo[209999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dndlzrlmnjyjivijfdjtyxvevyvwgraf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421390.145579-1641-21451075809757/AnsiballZ_copy.py'
Jan 26 09:56:31 compute-0 sudo[209999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:31 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe360000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:31 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe364000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 09:56:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:31.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 09:56:31 compute-0 python3.9[210001]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769421390.145579-1641-21451075809757/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:31 compute-0 sudo[209999]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:31.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:31 compute-0 sudo[210151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avsswlwqdvogujpylkzxhfqmaaekcjks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421391.3603966-1641-193546514506871/AnsiballZ_stat.py'
Jan 26 09:56:31 compute-0 sudo[210151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:31 compute-0 python3.9[210153]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:31 compute-0 sudo[210151]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:32 compute-0 sudo[210276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnhfphdortuupfmodegmndgednebzypo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421391.3603966-1641-193546514506871/AnsiballZ_copy.py'
Jan 26 09:56:32 compute-0 sudo[210276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:32 compute-0 ceph-mon[74456]: pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:56:32 compute-0 python3.9[210278]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769421391.3603966-1641-193546514506871/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:32 compute-0 sudo[210276]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095632 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:56:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:32 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:32 compute-0 sudo[210430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwkyubyejussqoqlsxzqvzkersdrduux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421392.549111-1641-264727165256165/AnsiballZ_stat.py'
Jan 26 09:56:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:56:32 compute-0 sudo[210430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:56:33 compute-0 python3.9[210432]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:33 compute-0 sudo[210430]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:33 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe35c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:33 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:33.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:33.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:33 compute-0 sudo[210553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptkqwugpcyzdmvgbafhfryiazaqmpybs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421392.549111-1641-264727165256165/AnsiballZ_copy.py'
Jan 26 09:56:33 compute-0 sudo[210553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:33 compute-0 python3.9[210555]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769421392.549111-1641-264727165256165/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:56:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:56:33 compute-0 sudo[210553]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:34 compute-0 ceph-mon[74456]: pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:56:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:56:34 compute-0 sudo[210705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlltiqsudrpgnmdnagduxrpivzibnnew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421393.9390578-1641-90470699981206/AnsiballZ_stat.py'
Jan 26 09:56:34 compute-0 sudo[210705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:34 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:34 compute-0 python3.9[210707]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:34 compute-0 sudo[210705]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:56:34 compute-0 sudo[210832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnrfkmmlzbhserpptmwxfhrkccuufmmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421393.9390578-1641-90470699981206/AnsiballZ_copy.py'
Jan 26 09:56:34 compute-0 sudo[210832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:35 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:35 compute-0 python3.9[210834]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769421393.9390578-1641-90470699981206/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:35 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:35 compute-0 sudo[210832]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:35.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:35.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:36 compute-0 ceph-mon[74456]: pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:56:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:36 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:56:36] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:56:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:56:36] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:56:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:56:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:56:37.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:56:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:37 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe35c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095637 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:56:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:37 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3600016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:37.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:37 compute-0 sudo[210986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pscezybspmrrmwieinrywwlaflfbrpxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421396.8540518-1980-124962562667764/AnsiballZ_command.py'
Jan 26 09:56:37 compute-0 sudo[210986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:37 compute-0 python3.9[210988]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 26 09:56:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:37.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:37 compute-0 sudo[210986]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095637 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:56:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:56:38 compute-0 sudo[211139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncdmngywoilqcyjcujgziqrlwvpmmsyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421397.8240502-2007-208882439921208/AnsiballZ_file.py'
Jan 26 09:56:38 compute-0 sudo[211139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:38 compute-0 ceph-mon[74456]: pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:56:38 compute-0 python3.9[211141]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:38 compute-0 sudo[211139]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:38 compute-0 sudo[211142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:56:38 compute-0 sudo[211142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:56:38 compute-0 sudo[211142]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:38 compute-0 podman[211167]: 2026-01-26 09:56:38.456952791 +0000 UTC m=+0.096454481 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 09:56:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:38 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c0089f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:38 compute-0 sudo[211346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juqfgkvvtizxnqccchgmsbuxwfcjfrqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421398.486129-2007-64467950877415/AnsiballZ_file.py'
Jan 26 09:56:38 compute-0 sudo[211346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:56:38 compute-0 python3.9[211348]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:38 compute-0 sudo[211346]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:39 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:39 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe35c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:39.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:39.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:39 compute-0 sudo[211498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnvcxaflyapjqijdkjrfkuigdmrjqgqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421399.238954-2007-134611832633074/AnsiballZ_file.py'
Jan 26 09:56:39 compute-0 sudo[211498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:39 compute-0 python3.9[211500]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:39 compute-0 sudo[211498]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:40 compute-0 sudo[211650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axtmgbkkrbaxtbeejujnrfzlbjsvfytv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421399.9279988-2007-138308009622866/AnsiballZ_file.py'
Jan 26 09:56:40 compute-0 sudo[211650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:40 compute-0 ceph-mon[74456]: pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:56:40 compute-0 python3.9[211652]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:40 compute-0 sudo[211650]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:40 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe360001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:40 compute-0 sudo[211804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejzjtepwupzxhlugszwvotxwdyubzuww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421400.5858428-2007-216988138389474/AnsiballZ_file.py'
Jan 26 09:56:40 compute-0 sudo[211804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:56:41 compute-0 python3.9[211806]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:41 compute-0 sudo[211804]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:41 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c009310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:41 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:41.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:41.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:41 compute-0 sudo[211956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msfjrglaoteilvaaacylezbycitynvms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421401.277407-2007-42208521293247/AnsiballZ_file.py'
Jan 26 09:56:41 compute-0 sudo[211956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:41 compute-0 python3.9[211958]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:41 compute-0 sudo[211956]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:42 compute-0 sudo[212108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nulczacbxxhqaezarbgwyisnzysgghxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421401.9704704-2007-179585190288652/AnsiballZ_file.py'
Jan 26 09:56:42 compute-0 sudo[212108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:42 compute-0 ceph-mon[74456]: pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:56:42 compute-0 python3.9[212110]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:42 compute-0 sudo[212108]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:42 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe35c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:56:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:56:43 compute-0 sudo[212262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nblklwkcziqvwizaxabathfixwvqagal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421402.6654267-2007-245593657069955/AnsiballZ_file.py'
Jan 26 09:56:43 compute-0 sudo[212262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:43 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe360001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:43 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c009310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:43.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:43 compute-0 python3.9[212264]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:43 compute-0 sudo[212262]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:43.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:43 compute-0 sudo[212414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dollbyvedfcfgtylleqjehoftjejjkfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421403.440334-2007-200659759315097/AnsiballZ_file.py'
Jan 26 09:56:43 compute-0 sudo[212414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:43 compute-0 python3.9[212416]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:43 compute-0 sudo[212414]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:44 compute-0 ceph-mon[74456]: pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 09:56:44 compute-0 sudo[212567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqvmfsckwslmddtvgklxxulrtwhataqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421404.1140761-2007-10201413049618/AnsiballZ_file.py'
Jan 26 09:56:44 compute-0 sudo[212567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:44 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:44 compute-0 python3.9[212570]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:44 compute-0 sudo[212567]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:56:45 compute-0 sudo[212720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfliwzbxepztdwkealrkrptwwegpfljg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421404.7500553-2007-126586463540352/AnsiballZ_file.py'
Jan 26 09:56:45 compute-0 sudo[212720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:45 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe35c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:45 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe360001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:45.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:45 compute-0 python3.9[212722]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:45 compute-0 sudo[212720]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:45.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:45 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:56:45 compute-0 sudo[212872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceryrqqdqnivhgzehdlaqmhkzgdajxty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421405.4285169-2007-88779933524334/AnsiballZ_file.py'
Jan 26 09:56:45 compute-0 sudo[212872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:45 compute-0 python3.9[212874]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:46 compute-0 sudo[212872]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:46 compute-0 ceph-mon[74456]: pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:56:46 compute-0 sudo[213026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwicmvzqmmanmcizverbjnfxjmqfjren ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421406.1693115-2007-160495252769818/AnsiballZ_file.py'
Jan 26 09:56:46 compute-0 sudo[213026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:46 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c009310 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:46 compute-0 python3.9[213028]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:56:46] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:56:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:56:46] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:56:46 compute-0 sudo[213026]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:56:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:56:47.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:56:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:47 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:47 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe35c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:47.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:47 compute-0 sudo[213178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twyzlzlkctytixmngiyhpkqxhlwyklhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421406.897459-2007-94020064724288/AnsiballZ_file.py'
Jan 26 09:56:47 compute-0 sudo[213178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:47.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:47 compute-0 python3.9[213180]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:47 compute-0 sudo[213178]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:56:48 compute-0 ceph-mon[74456]: pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:56:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:48 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3600032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:48 compute-0 sshd-session[213205]: Invalid user oracle from 157.245.76.178 port 49076
Jan 26 09:56:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:48 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:56:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:48 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:56:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:48 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:56:48 compute-0 sshd-session[213205]: Connection closed by invalid user oracle 157.245.76.178 port 49076 [preauth]
Jan 26 09:56:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:56:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:56:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:56:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:56:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:56:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:56:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:56:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:56:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:56:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:49 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c00a410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:49 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:49.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:49 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:56:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:49.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:56:50 compute-0 sudo[213334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twuwdweblcjajhqibpfgagsjlqdqytxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421409.5604842-2304-172693989649695/AnsiballZ_stat.py'
Jan 26 09:56:50 compute-0 sudo[213334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:50 compute-0 python3.9[213336]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:50 compute-0 sudo[213334]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:50 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe35c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:50 compute-0 ceph-mon[74456]: pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:56:50 compute-0 sudo[213459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sywzhklhmgnqcanwklblpqjtjltrvvwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421409.5604842-2304-172693989649695/AnsiballZ_copy.py'
Jan 26 09:56:50 compute-0 sudo[213459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:50 compute-0 python3.9[213461]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421409.5604842-2304-172693989649695/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:50 compute-0 sudo[213459]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:56:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:51 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3600032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:51 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c00a410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:51.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:51 compute-0 sudo[213611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfiimjicykfamicrvksyyeuhxrfrpxae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421411.010982-2304-79091349656282/AnsiballZ_stat.py'
Jan 26 09:56:51 compute-0 sudo[213611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:51.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:51 compute-0 python3.9[213613]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:51 compute-0 sudo[213611]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:51 compute-0 ceph-mon[74456]: pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:56:51 compute-0 sudo[213734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjjajntlpflcnmebrkqxktzeyuotebnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421411.010982-2304-79091349656282/AnsiballZ_copy.py'
Jan 26 09:56:51 compute-0 sudo[213734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:52 compute-0 python3.9[213736]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421411.010982-2304-79091349656282/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:52 compute-0 sudo[213734]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:52 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:56:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:52 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:52 compute-0 sudo[213888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbfrlnficwnitojooonxudowftdrehlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421412.2892864-2304-11012405173979/AnsiballZ_stat.py'
Jan 26 09:56:52 compute-0 sudo[213888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:52 compute-0 python3.9[213890]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:52 compute-0 sudo[213888]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:56:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:56:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:53 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe35c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:53 compute-0 sudo[214011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaefrdtfwtcdtcpfwfdxwbtpxblmmckq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421412.2892864-2304-11012405173979/AnsiballZ_copy.py'
Jan 26 09:56:53 compute-0 sudo[214011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:53 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe35c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:53.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:53 compute-0 python3.9[214013]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421412.2892864-2304-11012405173979/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:53 compute-0 sudo[214011]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:53.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:53 compute-0 sudo[214163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbkjxazicqzwtpkfyjuxtjpnsqfyfdka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421413.5080569-2304-193740576259529/AnsiballZ_stat.py'
Jan 26 09:56:53 compute-0 sudo[214163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:53 compute-0 ceph-mon[74456]: pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:56:54 compute-0 python3.9[214165]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:54 compute-0 sudo[214163]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:54 compute-0 sudo[214288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rawxjfmklruoyyzroaqjqcicysnazevu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421413.5080569-2304-193740576259529/AnsiballZ_copy.py'
Jan 26 09:56:54 compute-0 sudo[214288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:54 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c00a410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:54 compute-0 python3.9[214290]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421413.5080569-2304-193740576259529/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:54 compute-0 sudo[214288]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:56:54.678 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 09:56:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:56:54.679 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 09:56:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:56:54.679 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 09:56:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 26 09:56:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:55 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:55 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:55 compute-0 sudo[214440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrwfcvylptpimeowcwevnmcsctozxfwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421414.836469-2304-231854898432634/AnsiballZ_stat.py'
Jan 26 09:56:55 compute-0 sudo[214440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:55.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:55 compute-0 python3.9[214442]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:55 compute-0 sudo[214440]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:55.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:55 compute-0 sudo[214563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiyqvyfveikkitbopuevlmfnsjtqrxst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421414.836469-2304-231854898432634/AnsiballZ_copy.py'
Jan 26 09:56:55 compute-0 sudo[214563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:55 compute-0 python3.9[214565]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421414.836469-2304-231854898432634/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:55 compute-0 ceph-mon[74456]: pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 26 09:56:55 compute-0 sudo[214563]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:56 compute-0 sudo[214717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glzxxovizczidwdbuszdwfdvnbjcpfql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421416.1130836-2304-9310711906544/AnsiballZ_stat.py'
Jan 26 09:56:56 compute-0 sudo[214717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:56 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe360004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:56 compute-0 python3.9[214719]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:56 compute-0 sudo[214717]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:56:56] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 26 09:56:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:56:56] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Jan 26 09:56:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:56:56 compute-0 sudo[214850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzhtuekzwnfdxsscvczopxlktipzniul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421416.1130836-2304-9310711906544/AnsiballZ_copy.py'
Jan 26 09:56:56 compute-0 sudo[214850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:57 compute-0 podman[214814]: 2026-01-26 09:56:57.020799176 +0000 UTC m=+0.074470210 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 26 09:56:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:56:57.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:56:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:56:57.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:56:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:56:57.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:56:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:57 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe35c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095657 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:56:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:57 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:57 compute-0 python3.9[214854]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421416.1130836-2304-9310711906544/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:56:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:57.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:56:57 compute-0 sudo[214850]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:57.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095657 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:56:57 compute-0 sudo[215011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxcjckbdembzgtvwgxpeifgjiqkcmegr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421417.3377616-2304-186710649261727/AnsiballZ_stat.py'
Jan 26 09:56:57 compute-0 sudo[215011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:57 compute-0 python3.9[215013]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:57 compute-0 sudo[215011]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:56:57 compute-0 ceph-mon[74456]: pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:56:58 compute-0 sudo[215134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byovpbuyzjwyppzoubcomofoyrmwhkle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421417.3377616-2304-186710649261727/AnsiballZ_copy.py'
Jan 26 09:56:58 compute-0 sudo[215134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:58 compute-0 python3.9[215136]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421417.3377616-2304-186710649261727/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:58 compute-0 sudo[215134]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:58 compute-0 sudo[215139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:56:58 compute-0 sudo[215139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:56:58 compute-0 sudo[215139]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:58 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c00a410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:56:58 compute-0 sudo[215313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppixtzwkelbjzutpzfqslibrzdubgtok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421418.5503-2304-230488329652381/AnsiballZ_stat.py'
Jan 26 09:56:58 compute-0 sudo[215313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:59 compute-0 python3.9[215315]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:56:59 compute-0 sudo[215313]: pam_unix(sudo:session): session closed for user root
Jan 26 09:56:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:59 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe360004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:56:59 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe35c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:56:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:56:59.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:56:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:56:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:56:59.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:56:59 compute-0 sudo[215436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjkadelpfsxlejimjjwxtrlckzpyropd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421418.5503-2304-230488329652381/AnsiballZ_copy.py'
Jan 26 09:56:59 compute-0 sudo[215436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:56:59 compute-0 python3.9[215438]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421418.5503-2304-230488329652381/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:56:59 compute-0 sudo[215436]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:00 compute-0 ceph-mon[74456]: pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:57:00 compute-0 sudo[215588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvrxuppehzxyhnvsfiiihbjijtutwqcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421419.8316743-2304-44537767065042/AnsiballZ_stat.py'
Jan 26 09:57:00 compute-0 sudo[215588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:00 compute-0 python3.9[215590]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:00 compute-0 sudo[215588]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:57:00 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:57:00 compute-0 sudo[215714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmdlyvrkhxsfulwtxwxdgkvikalcypdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421419.8316743-2304-44537767065042/AnsiballZ_copy.py'
Jan 26 09:57:00 compute-0 sudo[215714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:01 compute-0 python3.9[215716]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421419.8316743-2304-44537767065042/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:01 compute-0 sudo[215714]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:57:01 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c00a410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:57:01 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c00a410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:01.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:01.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:01 compute-0 sudo[215867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgtvvuduxxxfviqgvaevhljeuumrdeja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421421.3164818-2304-166762805624313/AnsiballZ_stat.py'
Jan 26 09:57:01 compute-0 sudo[215867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:01 compute-0 anacron[2726]: Job `cron.monthly' started
Jan 26 09:57:01 compute-0 anacron[2726]: Job `cron.monthly' terminated
Jan 26 09:57:01 compute-0 anacron[2726]: Normal exit (3 jobs run)
Jan 26 09:57:01 compute-0 python3.9[215869]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:01 compute-0 sudo[215867]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:02 compute-0 ceph-mon[74456]: pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 09:57:02 compute-0 sudo[215992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfchfcemgjovixwocpvsxfrakuirflud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421421.3164818-2304-166762805624313/AnsiballZ_copy.py'
Jan 26 09:57:02 compute-0 sudo[215992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:02 compute-0 python3.9[215994]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421421.3164818-2304-166762805624313/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:57:02 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe354000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:02 compute-0 sudo[215992]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:57:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:57:02 compute-0 sudo[216146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvdswtjmxqnyytagqcypxpuprhppvanl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421422.6883097-2304-141298805152678/AnsiballZ_stat.py'
Jan 26 09:57:02 compute-0 sudo[216146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:57:03 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:03 compute-0 python3.9[216148]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:57:03 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe360004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:03 compute-0 sudo[216146]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:03.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:57:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:03.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:57:03 compute-0 sudo[216269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdethayufklnrswcnqncuzftcekewdyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421422.6883097-2304-141298805152678/AnsiballZ_copy.py'
Jan 26 09:57:03 compute-0 sudo[216269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:57:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:57:03 compute-0 python3.9[216271]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421422.6883097-2304-141298805152678/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:03 compute-0 sudo[216269]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:04 compute-0 ceph-mon[74456]: pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:57:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:57:04 compute-0 sudo[216423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jldenrawsvxnipxrtwbpacwaxhkscxjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421424.018302-2304-270466018269515/AnsiballZ_stat.py'
Jan 26 09:57:04 compute-0 sudo[216423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:57:04 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c00a410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:04 compute-0 python3.9[216425]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:04 compute-0 sudo[216423]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:57:05 compute-0 sudo[216546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdalqnhuhlcucmipfgicwocmvabnjxps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421424.018302-2304-270466018269515/AnsiballZ_copy.py'
Jan 26 09:57:05 compute-0 sudo[216546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:57:05 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3540016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:57:05 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe3740021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 09:57:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:05.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 09:57:05 compute-0 python3.9[216548]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421424.018302-2304-270466018269515/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:05 compute-0 sudo[216546]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:05.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:05 compute-0 sudo[216698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfwzwybpuqxehctatjjttxlydizpjhus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421425.4264305-2304-158550277329672/AnsiballZ_stat.py'
Jan 26 09:57:05 compute-0 sudo[216698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:05 compute-0 python3.9[216700]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:05 compute-0 sudo[216698]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:06 compute-0 ceph-mon[74456]: pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 170 B/s wr, 1 op/s
Jan 26 09:57:06 compute-0 sudo[216821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrbehjkytsndbxxslvrgriczmlgsozkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421425.4264305-2304-158550277329672/AnsiballZ_copy.py'
Jan 26 09:57:06 compute-0 sudo[216821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:57:06 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe360004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:06 compute-0 python3.9[216823]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421425.4264305-2304-158550277329672/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:06 compute-0 sudo[216821]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:57:06] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:57:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:57:06] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:57:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:57:07 compute-0 sudo[216975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grtxiffzesxitngrqwvftzcpajktrxjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421426.694484-2304-209545986636090/AnsiballZ_stat.py'
Jan 26 09:57:07 compute-0 sudo[216975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:57:07.038Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:57:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:57:07.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:57:07 compute-0 kernel: ganesha.nfsd[209430]: segfault at 50 ip 00007fe40698b32e sp 00007fe3b27fb210 error 4 in libntirpc.so.5.8[7fe406970000+2c000] likely on CPU 7 (core 0, socket 7)
Jan 26 09:57:07 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 09:57:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[207250]: 26/01/2026 09:57:07 : epoch 69773a40 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe37c00a410 fd 48 proxy ignored for local
Jan 26 09:57:07 compute-0 systemd[1]: Started Process Core Dump (PID 216978/UID 0).
Jan 26 09:57:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000008s ======
Jan 26 09:57:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:07.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Jan 26 09:57:07 compute-0 python3.9[216977]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:07 compute-0 sudo[216975]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:07.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:07 compute-0 sudo[217100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbfmqounjkvzrvjndndtqrmkhakolcsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421426.694484-2304-209545986636090/AnsiballZ_copy.py'
Jan 26 09:57:07 compute-0 sudo[217100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:07 compute-0 python3.9[217102]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421426.694484-2304-209545986636090/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:07 compute-0 sudo[217100]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:57:08 compute-0 ceph-mon[74456]: pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:57:08 compute-0 systemd-coredump[216979]: Process 207256 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 41:
                                                    #0  0x00007fe40698b32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 09:57:08 compute-0 systemd[1]: systemd-coredump@8-216978-0.service: Deactivated successfully.
Jan 26 09:57:08 compute-0 podman[217131]: 2026-01-26 09:57:08.334697577 +0000 UTC m=+0.028881191 container died 8a634fecc04d02b0778a3a5dad1920fd2e5933af6ab92dcb153bce4771eb91b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ce29b027ad7ff34e7034ab5f980b2c4f88e5e50af360a76e688905f91bc9f11-merged.mount: Deactivated successfully.
Jan 26 09:57:08 compute-0 podman[217131]: 2026-01-26 09:57:08.3777468 +0000 UTC m=+0.071930384 container remove 8a634fecc04d02b0778a3a5dad1920fd2e5933af6ab92dcb153bce4771eb91b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 09:57:08 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 09:57:08 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 09:57:08 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.234s CPU time.
Jan 26 09:57:08 compute-0 podman[217176]: 2026-01-26 09:57:08.643219873 +0000 UTC m=+0.094239221 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 09:57:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:57:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 09:57:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:09.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 09:57:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:09.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:10 compute-0 python3.9[217329]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:57:10 compute-0 ceph-mon[74456]: pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:57:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:57:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 09:57:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:11.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 09:57:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:11.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:11 compute-0 sudo[217483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhlpspixqswuitdvaahtzuptihtnzslx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421431.0331273-2922-248318996276015/AnsiballZ_seboolean.py'
Jan 26 09:57:11 compute-0 sudo[217483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:11 compute-0 python3.9[217485]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 26 09:57:11 compute-0 ceph-mon[74456]: pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:57:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095712 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:57:12 compute-0 sudo[217483]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:57:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:57:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:13.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:13.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:13 compute-0 sudo[217642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcocslejtgrxrottceguklaogsttxdee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421433.2887754-2946-118318373722430/AnsiballZ_copy.py'
Jan 26 09:57:13 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 26 09:57:13 compute-0 sudo[217642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:13 compute-0 python3.9[217644]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:13 compute-0 sudo[217642]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:13 compute-0 ceph-mon[74456]: pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:57:14 compute-0 sudo[217794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boztmlzddcrxigzanoslpxgdjenjbdfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421433.9517684-2946-202936061137381/AnsiballZ_copy.py'
Jan 26 09:57:14 compute-0 sudo[217794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:14 compute-0 python3.9[217796]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:14 compute-0 sudo[217794]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:57:14 compute-0 sudo[217948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlfxblffqpgsgzhjecsedtvlvgssiwnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421434.632599-2946-21935033927306/AnsiballZ_copy.py'
Jan 26 09:57:14 compute-0 sudo[217948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:15.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:15 compute-0 python3.9[217950]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:15 compute-0 sudo[217948]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:15.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:15 compute-0 sudo[218100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvtyntifjhjntwjldbagbwrotjqnwrmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421435.4262183-2946-152295588497496/AnsiballZ_copy.py'
Jan 26 09:57:15 compute-0 sudo[218100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:15 compute-0 python3.9[218102]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:15 compute-0 sudo[218100]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:15 compute-0 ceph-mon[74456]: pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:57:16 compute-0 sudo[218254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysckidvqnejsvyhxyqfuurhmkognmmdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421436.10009-2946-46101925826715/AnsiballZ_copy.py'
Jan 26 09:57:16 compute-0 sudo[218254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:16 compute-0 python3.9[218256]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:57:16] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:57:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:57:16] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:57:16 compute-0 sudo[218254]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:57:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:57:17.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:57:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:57:17.040Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:57:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000008s ======
Jan 26 09:57:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:17.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Jan 26 09:57:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:17.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:17 compute-0 sudo[218406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqwrqvowsietuzekmpoviaxyepebysph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421437.3369613-3054-278034813614712/AnsiballZ_copy.py'
Jan 26 09:57:17 compute-0 sudo[218406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:17 compute-0 python3.9[218408]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:57:17 compute-0 sudo[218406]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:18 compute-0 ceph-mon[74456]: pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:57:18 compute-0 sudo[218528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:57:18 compute-0 sudo[218528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:18 compute-0 sudo[218528]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:18 compute-0 sudo[218590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlhbzeeeifrgltjlkoubnibazozaakat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421438.0472212-3054-150769189498116/AnsiballZ_copy.py'
Jan 26 09:57:18 compute-0 sudo[218590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:18 compute-0 sudo[218582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 26 09:57:18 compute-0 sudo[218582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:18 compute-0 sudo[218613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:57:18 compute-0 sudo[218613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:18 compute-0 sudo[218613]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:18 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 9.
Jan 26 09:57:18 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:57:18 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.234s CPU time.
Jan 26 09:57:18 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:57:18 compute-0 python3.9[218610]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:18 compute-0 sudo[218590]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:57:18
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.nfs', '.mgr', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'volumes', 'vms']
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:57:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:57:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:57:18 compute-0 podman[218768]: 2026-01-26 09:57:18.801422693 +0000 UTC m=+0.054323443 container create 37c7ff9dac09a5e0a9ab0a34a0788c19e3f5294b2735d5f22fe7c31b60a37cb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac777dec9495ab96cfe54510ac1080d6a5df76d21dd5c638be20c8620f439a5/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac777dec9495ab96cfe54510ac1080d6a5df76d21dd5c638be20c8620f439a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac777dec9495ab96cfe54510ac1080d6a5df76d21dd5c638be20c8620f439a5/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac777dec9495ab96cfe54510ac1080d6a5df76d21dd5c638be20c8620f439a5/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:18 compute-0 podman[218768]: 2026-01-26 09:57:18.780339845 +0000 UTC m=+0.033240615 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:57:18 compute-0 podman[218768]: 2026-01-26 09:57:18.887233727 +0000 UTC m=+0.140134497 container init 37c7ff9dac09a5e0a9ab0a34a0788c19e3f5294b2735d5f22fe7c31b60a37cb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:57:18 compute-0 podman[218768]: 2026-01-26 09:57:18.901177538 +0000 UTC m=+0.154078318 container start 37c7ff9dac09a5e0a9ab0a34a0788c19e3f5294b2735d5f22fe7c31b60a37cb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:57:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:57:18 compute-0 bash[218768]: 37c7ff9dac09a5e0a9ab0a34a0788c19e3f5294b2735d5f22fe7c31b60a37cb1
Jan 26 09:57:18 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:57:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:18 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 09:57:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:18 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 09:57:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:18 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 09:57:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:18 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 09:57:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:18 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 09:57:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:18 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 09:57:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:18 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 09:57:18 compute-0 podman[218876]: 2026-01-26 09:57:18.991448507 +0000 UTC m=+0.074133542 container exec 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:57:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:19 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:57:19 compute-0 sudo[218981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yezddjbhmatapukmjedtzhchwocpvyyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421438.7428749-3054-175797905329070/AnsiballZ_copy.py'
Jan 26 09:57:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:57:19 compute-0 sudo[218981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:19 compute-0 podman[218876]: 2026-01-26 09:57:19.083032416 +0000 UTC m=+0.165717461 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:57:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:19.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:19 compute-0 python3.9[218983]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:19 compute-0 sudo[218981]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 09:57:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:19.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 09:57:19 compute-0 podman[219157]: 2026-01-26 09:57:19.6084685 +0000 UTC m=+0.068121924 container exec 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:57:19 compute-0 podman[219157]: 2026-01-26 09:57:19.617015918 +0000 UTC m=+0.076669342 container exec_died 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:57:19 compute-0 sudo[219303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waqaahmqwdkzkakgtokvaqznhvfgrbpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421439.4627674-3054-34238007489610/AnsiballZ_copy.py'
Jan 26 09:57:19 compute-0 sudo[219303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:19 compute-0 podman[219322]: 2026-01-26 09:57:19.945625015 +0000 UTC m=+0.053015243 container exec 37c7ff9dac09a5e0a9ab0a34a0788c19e3f5294b2735d5f22fe7c31b60a37cb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 09:57:19 compute-0 podman[219322]: 2026-01-26 09:57:19.956511581 +0000 UTC m=+0.063901759 container exec_died 37c7ff9dac09a5e0a9ab0a34a0788c19e3f5294b2735d5f22fe7c31b60a37cb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 26 09:57:20 compute-0 python3.9[219309]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:20 compute-0 sudo[219303]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:20 compute-0 ceph-mon[74456]: pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:57:20 compute-0 podman[219413]: 2026-01-26 09:57:20.188977253 +0000 UTC m=+0.061391260 container exec 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 09:57:20 compute-0 podman[219413]: 2026-01-26 09:57:20.208863042 +0000 UTC m=+0.081276979 container exec_died 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 09:57:20 compute-0 podman[219556]: 2026-01-26 09:57:20.458028926 +0000 UTC m=+0.065351672 container exec 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., release=1793, io.openshift.expose-services=, description=keepalived for Ceph, version=2.2.4)
Jan 26 09:57:20 compute-0 podman[219556]: 2026-01-26 09:57:20.484638268 +0000 UTC m=+0.091960964 container exec_died 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, name=keepalived, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, vcs-type=git, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 26 09:57:20 compute-0 sudo[219645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qolxrqjgpatkbbraoouwybuhjcxoonxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421440.1872356-3054-263344767833922/AnsiballZ_copy.py'
Jan 26 09:57:20 compute-0 sudo[219645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:20 compute-0 podman[219675]: 2026-01-26 09:57:20.735331354 +0000 UTC m=+0.058292376 container exec c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:57:20 compute-0 podman[219675]: 2026-01-26 09:57:20.764515496 +0000 UTC m=+0.087476498 container exec_died c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:57:20 compute-0 python3.9[219656]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:20 compute-0 sudo[219645]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:57:21 compute-0 podman[219772]: 2026-01-26 09:57:21.003501359 +0000 UTC m=+0.077182306 container exec ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:57:21 compute-0 podman[219772]: 2026-01-26 09:57:21.180862651 +0000 UTC m=+0.254543598 container exec_died ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 09:57:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 09:57:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:21.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 09:57:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:21.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:21 compute-0 podman[219903]: 2026-01-26 09:57:21.6379115 +0000 UTC m=+0.072897591 container exec 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:57:21 compute-0 podman[219903]: 2026-01-26 09:57:21.674600223 +0000 UTC m=+0.109586214 container exec_died 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 09:57:21 compute-0 sudo[218582]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:57:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:57:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:57:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:57:21 compute-0 sudo[219998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:57:21 compute-0 sudo[219998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:21 compute-0 sudo[219998]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:21 compute-0 sudo[220096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxjjlvnopizeqpbrqtzqutvfdtrmrqyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421441.5630257-3162-183106991636216/AnsiballZ_systemd.py'
Jan 26 09:57:21 compute-0 sudo[220096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:21 compute-0 sudo[220048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:57:21 compute-0 sudo[220048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:22 compute-0 ceph-mon[74456]: pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:57:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:57:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:57:22 compute-0 python3.9[220099]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:57:22 compute-0 systemd[1]: Reloading.
Jan 26 09:57:22 compute-0 systemd-rc-local-generator[220140]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:57:22 compute-0 systemd-sysv-generator[220145]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:57:22 compute-0 sudo[220048]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:57:22 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:57:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:57:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:57:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:57:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:57:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:57:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:57:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:57:22 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:57:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:57:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:57:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:57:22 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:57:22 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 26 09:57:22 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 26 09:57:22 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 26 09:57:22 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 26 09:57:22 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 26 09:57:22 compute-0 sudo[220172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:57:22 compute-0 sudo[220172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:22 compute-0 sudo[220172]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:22 compute-0 sudo[220200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:57:22 compute-0 sudo[220200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:22 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 26 09:57:22 compute-0 sudo[220096]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:57:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:57:23 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:57:23 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:57:23 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:57:23 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:57:23 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:57:23 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:57:23 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:57:23 compute-0 podman[220365]: 2026-01-26 09:57:23.156452233 +0000 UTC m=+0.056915535 container create 390a916edb193c4b6a8305ee1da1ca9f3e56a40823bdf9121bda2cc813ec13a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_pascal, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:57:23 compute-0 systemd[1]: Started libpod-conmon-390a916edb193c4b6a8305ee1da1ca9f3e56a40823bdf9121bda2cc813ec13a8.scope.
Jan 26 09:57:23 compute-0 podman[220365]: 2026-01-26 09:57:23.123685663 +0000 UTC m=+0.024149055 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:57:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:57:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 09:57:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:23.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
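The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102 look like load-balancer health checks against radosgw (an inference; the probing client is not identified in the log). The probe can be reproduced with curl, assuming the beast frontend listens on its default port 7480 on this host (the actual port is not shown in the log):

  # curl -sI http://127.0.0.1:7480/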
Jan 26 09:57:23 compute-0 podman[220365]: 2026-01-26 09:57:23.251590831 +0000 UTC m=+0.152054183 container init 390a916edb193c4b6a8305ee1da1ca9f3e56a40823bdf9121bda2cc813ec13a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_pascal, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 26 09:57:23 compute-0 podman[220365]: 2026-01-26 09:57:23.260060998 +0000 UTC m=+0.160524300 container start 390a916edb193c4b6a8305ee1da1ca9f3e56a40823bdf9121bda2cc813ec13a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_pascal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:57:23 compute-0 podman[220365]: 2026-01-26 09:57:23.263598436 +0000 UTC m=+0.164061728 container attach 390a916edb193c4b6a8305ee1da1ca9f3e56a40823bdf9121bda2cc813ec13a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_pascal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 09:57:23 compute-0 strange_pascal[220396]: 167 167
Jan 26 09:57:23 compute-0 systemd[1]: libpod-390a916edb193c4b6a8305ee1da1ca9f3e56a40823bdf9121bda2cc813ec13a8.scope: Deactivated successfully.
Jan 26 09:57:23 compute-0 podman[220365]: 2026-01-26 09:57:23.266402448 +0000 UTC m=+0.166865760 container died 390a916edb193c4b6a8305ee1da1ca9f3e56a40823bdf9121bda2cc813ec13a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:57:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cb785f8657795b1840841b11e9a23633ceaae1218ee98738126eaab50cc94f6-merged.mount: Deactivated successfully.
Jan 26 09:57:23 compute-0 podman[220365]: 2026-01-26 09:57:23.305670612 +0000 UTC m=+0.206133914 container remove 390a916edb193c4b6a8305ee1da1ca9f3e56a40823bdf9121bda2cc813ec13a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 09:57:23 compute-0 sudo[220448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkoizywlgfwqausxoiibhliylnqbhjwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421442.92681-3162-50147598164436/AnsiballZ_systemd.py'
Jan 26 09:57:23 compute-0 systemd[1]: libpod-conmon-390a916edb193c4b6a8305ee1da1ca9f3e56a40823bdf9121bda2cc813ec13a8.scope: Deactivated successfully.
Jan 26 09:57:23 compute-0 sudo[220448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 09:57:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:23.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 09:57:23 compute-0 podman[220458]: 2026-01-26 09:57:23.50638955 +0000 UTC m=+0.053243055 container create ded6988233b70338646ffb76d37f5d630824d2dfe5ea05de922e9ed1c13ed656 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_khorana, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:57:23 compute-0 systemd[1]: Started libpod-conmon-ded6988233b70338646ffb76d37f5d630824d2dfe5ea05de922e9ed1c13ed656.scope.
Jan 26 09:57:23 compute-0 podman[220458]: 2026-01-26 09:57:23.486039877 +0000 UTC m=+0.032893382 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:57:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632701ca43be2769c0c4cbff2be993275ad8a39d6f5451d060a69cfe0560dfbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632701ca43be2769c0c4cbff2be993275ad8a39d6f5451d060a69cfe0560dfbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632701ca43be2769c0c4cbff2be993275ad8a39d6f5451d060a69cfe0560dfbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632701ca43be2769c0c4cbff2be993275ad8a39d6f5451d060a69cfe0560dfbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632701ca43be2769c0c4cbff2be993275ad8a39d6f5451d060a69cfe0560dfbb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:23 compute-0 podman[220458]: 2026-01-26 09:57:23.613108449 +0000 UTC m=+0.159961954 container init ded6988233b70338646ffb76d37f5d630824d2dfe5ea05de922e9ed1c13ed656 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:57:23 compute-0 podman[220458]: 2026-01-26 09:57:23.620911542 +0000 UTC m=+0.167765027 container start ded6988233b70338646ffb76d37f5d630824d2dfe5ea05de922e9ed1c13ed656 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_khorana, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 26 09:57:23 compute-0 podman[220458]: 2026-01-26 09:57:23.624860263 +0000 UTC m=+0.171713748 container attach ded6988233b70338646ffb76d37f5d630824d2dfe5ea05de922e9ed1c13ed656 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_khorana, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 26 09:57:23 compute-0 python3.9[220451]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
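The ansible.builtin.systemd invocation above (daemon_reload=True, state=restarted) is roughly equivalent to running the following on the host, which accounts for the "Reloading." line that follows:

  # systemctl daemon-reload
  # systemctl restart virtnodedevd.service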
Jan 26 09:57:23 compute-0 systemd[1]: Reloading.
Jan 26 09:57:23 compute-0 systemd-sysv-generator[220507]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:57:23 compute-0 systemd-rc-local-generator[220501]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:57:23 compute-0 gifted_khorana[220474]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:57:23 compute-0 gifted_khorana[220474]: --> All data devices are unavailable
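"All data devices are unavailable" reads here as expected rather than as an error: the only candidate LV, /dev/ceph_vg0/ceph_lv0, already carries OSD 0 (see the lvm list output further down), so the batch run has nothing new to prepare. Device eligibility can be double-checked with ceph-volume's inventory; a sketch using the fsid from the log:

  # cephadm ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- inventory --format json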
Jan 26 09:57:23 compute-0 systemd[1]: libpod-ded6988233b70338646ffb76d37f5d630824d2dfe5ea05de922e9ed1c13ed656.scope: Deactivated successfully.
Jan 26 09:57:23 compute-0 podman[220458]: 2026-01-26 09:57:23.975814487 +0000 UTC m=+0.522667982 container died ded6988233b70338646ffb76d37f5d630824d2dfe5ea05de922e9ed1c13ed656 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_khorana, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 26 09:57:23 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 26 09:57:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-632701ca43be2769c0c4cbff2be993275ad8a39d6f5451d060a69cfe0560dfbb-merged.mount: Deactivated successfully.
Jan 26 09:57:24 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 26 09:57:24 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 26 09:57:24 compute-0 podman[220458]: 2026-01-26 09:57:24.027736141 +0000 UTC m=+0.574589626 container remove ded6988233b70338646ffb76d37f5d630824d2dfe5ea05de922e9ed1c13ed656 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 26 09:57:24 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 26 09:57:24 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 26 09:57:24 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 26 09:57:24 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 26 09:57:24 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 26 09:57:24 compute-0 systemd[1]: libpod-conmon-ded6988233b70338646ffb76d37f5d630824d2dfe5ea05de922e9ed1c13ed656.scope: Deactivated successfully.
Jan 26 09:57:24 compute-0 sudo[220200]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:24 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 26 09:57:24 compute-0 sudo[220448]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:24 compute-0 sudo[220563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:57:24 compute-0 sudo[220563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:24 compute-0 sudo[220563]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:24 compute-0 sudo[220601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:57:24 compute-0 sudo[220601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:24 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 26 09:57:24 compute-0 sudo[220814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwrylqevyeftqjjdgbmyabfqtpqzwufk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421444.3023827-3162-120334382871245/AnsiballZ_systemd.py'
Jan 26 09:57:24 compute-0 sudo[220814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:24 compute-0 ceph-mon[74456]: pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:57:24 compute-0 podman[220799]: 2026-01-26 09:57:24.657970039 +0000 UTC m=+0.047867062 container create 7065592880fc951b6bfd383f61b241d16a0dd00b7791b1e2a6cdf341ef699533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_keldysh, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:57:24 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 26 09:57:24 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 26 09:57:24 compute-0 systemd[1]: Started libpod-conmon-7065592880fc951b6bfd383f61b241d16a0dd00b7791b1e2a6cdf341ef699533.scope.
Jan 26 09:57:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:57:24 compute-0 podman[220799]: 2026-01-26 09:57:24.635923584 +0000 UTC m=+0.025820637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:57:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:57:24 compute-0 python3.9[220821]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:57:24 compute-0 systemd[1]: Reloading.
Jan 26 09:57:25 compute-0 systemd-rc-local-generator[220861]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:57:25 compute-0 systemd-sysv-generator[220864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:57:25 compute-0 podman[220799]: 2026-01-26 09:57:25.035827948 +0000 UTC m=+0.425725001 container init 7065592880fc951b6bfd383f61b241d16a0dd00b7791b1e2a6cdf341ef699533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:57:25 compute-0 podman[220799]: 2026-01-26 09:57:25.045696057 +0000 UTC m=+0.435593070 container start 7065592880fc951b6bfd383f61b241d16a0dd00b7791b1e2a6cdf341ef699533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:57:25 compute-0 crazy_keldysh[220830]: 167 167
Jan 26 09:57:25 compute-0 podman[220799]: 2026-01-26 09:57:25.062627351 +0000 UTC m=+0.452524374 container attach 7065592880fc951b6bfd383f61b241d16a0dd00b7791b1e2a6cdf341ef699533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:57:25 compute-0 podman[220799]: 2026-01-26 09:57:25.063024645 +0000 UTC m=+0.452921668 container died 7065592880fc951b6bfd383f61b241d16a0dd00b7791b1e2a6cdf341ef699533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 09:57:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:25 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:57:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:25 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:57:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:25.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:25 compute-0 systemd[1]: libpod-7065592880fc951b6bfd383f61b241d16a0dd00b7791b1e2a6cdf341ef699533.scope: Deactivated successfully.
Jan 26 09:57:25 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 26 09:57:25 compute-0 podman[220799]: 2026-01-26 09:57:25.311619124 +0000 UTC m=+0.701516147 container remove 7065592880fc951b6bfd383f61b241d16a0dd00b7791b1e2a6cdf341ef699533 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_keldysh, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:57:25 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 26 09:57:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-919f3bb14495157572d60c964e11adbc73d8b96f7f01fafd3e970376a72bb571-merged.mount: Deactivated successfully.
Jan 26 09:57:25 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 26 09:57:25 compute-0 systemd[1]: libpod-conmon-7065592880fc951b6bfd383f61b241d16a0dd00b7791b1e2a6cdf341ef699533.scope: Deactivated successfully.
Jan 26 09:57:25 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 26 09:57:25 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 26 09:57:25 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 26 09:57:25 compute-0 sudo[220814]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:25.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:25 compute-0 podman[220916]: 2026-01-26 09:57:25.487165933 +0000 UTC m=+0.052095647 container create 4df229db92257d19c4d16e41d93c0870f980c17d147d53ab28c1c325ad3caa27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_cerf, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:57:25 compute-0 systemd[1]: Started libpod-conmon-4df229db92257d19c4d16e41d93c0870f980c17d147d53ab28c1c325ad3caa27.scope.
Jan 26 09:57:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:57:25 compute-0 podman[220916]: 2026-01-26 09:57:25.460510981 +0000 UTC m=+0.025440715 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1331a5b7a6abd879884aa9ab91ccfb7c748f8f3ee4de8ffcd07e053c9511d6bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1331a5b7a6abd879884aa9ab91ccfb7c748f8f3ee4de8ffcd07e053c9511d6bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1331a5b7a6abd879884aa9ab91ccfb7c748f8f3ee4de8ffcd07e053c9511d6bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1331a5b7a6abd879884aa9ab91ccfb7c748f8f3ee4de8ffcd07e053c9511d6bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:25 compute-0 podman[220916]: 2026-01-26 09:57:25.592926664 +0000 UTC m=+0.157856398 container init 4df229db92257d19c4d16e41d93c0870f980c17d147d53ab28c1c325ad3caa27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:57:25 compute-0 podman[220916]: 2026-01-26 09:57:25.600924128 +0000 UTC m=+0.165853832 container start 4df229db92257d19c4d16e41d93c0870f980c17d147d53ab28c1c325ad3caa27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_cerf, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Jan 26 09:57:25 compute-0 setroubleshoot[220541]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 48c4fa73-9469-41e2-8e68-6f2c08c8c962
Jan 26 09:57:25 compute-0 setroubleshoot[220541]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
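After installing the module suggested above, it is worth confirming that it actually loaded and that the denial stops recurring; a sketch using the module name from setroubleshoot's own suggestion:

  # semodule -l | grep my-virtlogd
  # ausearch -m avc -c virtlogd -ts recent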
Jan 26 09:57:25 compute-0 podman[220916]: 2026-01-26 09:57:25.646534261 +0000 UTC m=+0.211463975 container attach 4df229db92257d19c4d16e41d93c0870f980c17d147d53ab28c1c325ad3caa27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 09:57:25 compute-0 ceph-mon[74456]: pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:57:25 compute-0 sudo[221091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdbinvjfnjlfrmdhcetvddevfyvweabv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421445.5535693-3162-72823170230559/AnsiballZ_systemd.py'
Jan 26 09:57:25 compute-0 sudo[221091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:25 compute-0 angry_cerf[220964]: {
Jan 26 09:57:25 compute-0 angry_cerf[220964]:     "0": [
Jan 26 09:57:25 compute-0 angry_cerf[220964]:         {
Jan 26 09:57:25 compute-0 angry_cerf[220964]:             "devices": [
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "/dev/loop3"
Jan 26 09:57:25 compute-0 angry_cerf[220964]:             ],
Jan 26 09:57:25 compute-0 angry_cerf[220964]:             "lv_name": "ceph_lv0",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:             "lv_size": "21470642176",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:             "name": "ceph_lv0",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:             "tags": {
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "ceph.cluster_name": "ceph",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "ceph.crush_device_class": "",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "ceph.encrypted": "0",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "ceph.osd_id": "0",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "ceph.type": "block",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "ceph.vdo": "0",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:                 "ceph.with_tpm": "0"
Jan 26 09:57:25 compute-0 angry_cerf[220964]:             },
Jan 26 09:57:25 compute-0 angry_cerf[220964]:             "type": "block",
Jan 26 09:57:25 compute-0 angry_cerf[220964]:             "vg_name": "ceph_vg0"
Jan 26 09:57:25 compute-0 angry_cerf[220964]:         }
Jan 26 09:57:25 compute-0 angry_cerf[220964]:     ]
Jan 26 09:57:25 compute-0 angry_cerf[220964]: }
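The JSON above is ceph-volume lvm list output: OSD 0 is backed by LV ceph_lv0 in VG ceph_vg0, which in turn sits on /dev/loop3, with the cluster fsid and OSD fsid recorded as LV tags. Once the journald prefix is stripped, individual fields can be pulled out with jq; a sketch assuming the payload was saved to lvm-list.json:

  # jq -r '."0"[0].tags."ceph.osd_fsid"' lvm-list.json
  ac85653c-ceaa-4fd5-80ce-94914596ed49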
Jan 26 09:57:25 compute-0 systemd[1]: libpod-4df229db92257d19c4d16e41d93c0870f980c17d147d53ab28c1c325ad3caa27.scope: Deactivated successfully.
Jan 26 09:57:25 compute-0 podman[221094]: 2026-01-26 09:57:25.981264647 +0000 UTC m=+0.027444959 container died 4df229db92257d19c4d16e41d93c0870f980c17d147d53ab28c1c325ad3caa27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_cerf, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 09:57:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1331a5b7a6abd879884aa9ab91ccfb7c748f8f3ee4de8ffcd07e053c9511d6bf-merged.mount: Deactivated successfully.
Jan 26 09:57:26 compute-0 podman[221094]: 2026-01-26 09:57:26.035362238 +0000 UTC m=+0.081542550 container remove 4df229db92257d19c4d16e41d93c0870f980c17d147d53ab28c1c325ad3caa27 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_cerf, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 26 09:57:26 compute-0 systemd[1]: libpod-conmon-4df229db92257d19c4d16e41d93c0870f980c17d147d53ab28c1c325ad3caa27.scope: Deactivated successfully.
Jan 26 09:57:26 compute-0 sudo[220601]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:26 compute-0 sudo[221110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:57:26 compute-0 sudo[221110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:26 compute-0 sudo[221110]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:26 compute-0 python3.9[221093]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:57:26 compute-0 sudo[221135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:57:26 compute-0 sudo[221135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:26 compute-0 systemd[1]: Reloading.
Jan 26 09:57:26 compute-0 systemd-rc-local-generator[221189]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:57:26 compute-0 systemd-sysv-generator[221193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:57:26 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 26 09:57:26 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 26 09:57:26 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 26 09:57:26 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 26 09:57:26 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 26 09:57:26 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 26 09:57:26 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 26 09:57:26 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 26 09:57:26 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 26 09:57:26 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 26 09:57:26 compute-0 podman[221240]: 2026-01-26 09:57:26.614748991 +0000 UTC m=+0.043336726 container create f0e44cb0779fcb8368c7c8ed83835517e489fffcb37a0ca7e07c076562de3bcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:57:26 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 26 09:57:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:57:26] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 09:57:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:57:26] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 09:57:26 compute-0 systemd[1]: Started libpod-conmon-f0e44cb0779fcb8368c7c8ed83835517e489fffcb37a0ca7e07c076562de3bcb.scope.
Jan 26 09:57:26 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 26 09:57:26 compute-0 podman[221240]: 2026-01-26 09:57:26.5945001 +0000 UTC m=+0.023087855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:57:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:57:26 compute-0 sudo[221091]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:26 compute-0 podman[221240]: 2026-01-26 09:57:26.721098248 +0000 UTC m=+0.149686003 container init f0e44cb0779fcb8368c7c8ed83835517e489fffcb37a0ca7e07c076562de3bcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:57:26 compute-0 podman[221240]: 2026-01-26 09:57:26.731647462 +0000 UTC m=+0.160235237 container start f0e44cb0779fcb8368c7c8ed83835517e489fffcb37a0ca7e07c076562de3bcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 09:57:26 compute-0 podman[221240]: 2026-01-26 09:57:26.736045698 +0000 UTC m=+0.164633463 container attach f0e44cb0779fcb8368c7c8ed83835517e489fffcb37a0ca7e07c076562de3bcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_bhaskara, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 09:57:26 compute-0 blissful_bhaskara[221279]: 167 167
Jan 26 09:57:26 compute-0 systemd[1]: libpod-f0e44cb0779fcb8368c7c8ed83835517e489fffcb37a0ca7e07c076562de3bcb.scope: Deactivated successfully.
Jan 26 09:57:26 compute-0 podman[221240]: 2026-01-26 09:57:26.739875327 +0000 UTC m=+0.168463062 container died f0e44cb0779fcb8368c7c8ed83835517e489fffcb37a0ca7e07c076562de3bcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_bhaskara, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 09:57:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4a53c9e302a3853c83e2bcf999bd7a6699fa02fa6c6695bf4025b1c4bcdd839-merged.mount: Deactivated successfully.
Jan 26 09:57:26 compute-0 podman[221240]: 2026-01-26 09:57:26.791306797 +0000 UTC m=+0.219894542 container remove f0e44cb0779fcb8368c7c8ed83835517e489fffcb37a0ca7e07c076562de3bcb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:57:26 compute-0 systemd[1]: libpod-conmon-f0e44cb0779fcb8368c7c8ed83835517e489fffcb37a0ca7e07c076562de3bcb.scope: Deactivated successfully.
Jan 26 09:57:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:57:26 compute-0 podman[221355]: 2026-01-26 09:57:26.997082826 +0000 UTC m=+0.053373536 container create b7ae6906392f2662c6ecc414d82293fbce18811d2144b4ffe099bdea0a973045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:57:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:57:27.041Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:57:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:57:27.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
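Both alertmanager webhook receivers for the ceph-dashboard integration are failing: compute-1 times out at the TCP level and compute-2 exceeds the notify deadline, while compute-0's own mgr is clearly up (the Prometheus scrape at 09:57:26 returned 200). Reachability of a failing receiver can be probed directly with the URL taken verbatim from the log; even a non-2xx reply would show the port is open:

  # curl -sv --max-time 5 http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver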
Jan 26 09:57:27 compute-0 systemd[1]: Started libpod-conmon-b7ae6906392f2662c6ecc414d82293fbce18811d2144b4ffe099bdea0a973045.scope.
Jan 26 09:57:27 compute-0 podman[221355]: 2026-01-26 09:57:26.978242946 +0000 UTC m=+0.034533686 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:57:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9b9bc4d8f88ad4ce6080493508cfbf4e3f8c15c7d6720f016acb4076a1adb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9b9bc4d8f88ad4ce6080493508cfbf4e3f8c15c7d6720f016acb4076a1adb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9b9bc4d8f88ad4ce6080493508cfbf4e3f8c15c7d6720f016acb4076a1adb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9b9bc4d8f88ad4ce6080493508cfbf4e3f8c15c7d6720f016acb4076a1adb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:57:27 compute-0 podman[221355]: 2026-01-26 09:57:27.099593042 +0000 UTC m=+0.155883772 container init b7ae6906392f2662c6ecc414d82293fbce18811d2144b4ffe099bdea0a973045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:57:27 compute-0 podman[221355]: 2026-01-26 09:57:27.109674162 +0000 UTC m=+0.165964872 container start b7ae6906392f2662c6ecc414d82293fbce18811d2144b4ffe099bdea0a973045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:57:27 compute-0 podman[221355]: 2026-01-26 09:57:27.112747137 +0000 UTC m=+0.169037847 container attach b7ae6906392f2662c6ecc414d82293fbce18811d2144b4ffe099bdea0a973045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:57:27 compute-0 podman[221413]: 2026-01-26 09:57:27.139574971 +0000 UTC m=+0.066334750 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 09:57:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 09:57:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:27.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 09:57:27 compute-0 sudo[221496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kblgicvlhvhbnjvsdcoutqwwcyjfflub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421446.907028-3162-190382938688936/AnsiballZ_systemd.py'
Jan 26 09:57:27 compute-0 sudo[221496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:27.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:27 compute-0 python3.9[221499]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:57:27 compute-0 systemd[1]: Reloading.
Jan 26 09:57:27 compute-0 systemd-sysv-generator[221596]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:57:27 compute-0 systemd-rc-local-generator[221590]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:57:27 compute-0 quizzical_volhard[221410]: {}
Jan 26 09:57:27 compute-0 podman[221355]: 2026-01-26 09:57:27.837712139 +0000 UTC m=+0.894002909 container died b7ae6906392f2662c6ecc414d82293fbce18811d2144b4ffe099bdea0a973045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_volhard, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 09:57:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:57:27 compute-0 systemd[1]: libpod-b7ae6906392f2662c6ecc414d82293fbce18811d2144b4ffe099bdea0a973045.scope: Deactivated successfully.
Jan 26 09:57:27 compute-0 systemd[1]: libpod-b7ae6906392f2662c6ecc414d82293fbce18811d2144b4ffe099bdea0a973045.scope: Consumed 1.173s CPU time.
Jan 26 09:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b9b9bc4d8f88ad4ce6080493508cfbf4e3f8c15c7d6720f016acb4076a1adb6-merged.mount: Deactivated successfully.
Jan 26 09:57:27 compute-0 podman[221355]: 2026-01-26 09:57:27.972295642 +0000 UTC m=+1.028586392 container remove b7ae6906392f2662c6ecc414d82293fbce18811d2144b4ffe099bdea0a973045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:57:27 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 26 09:57:27 compute-0 systemd[1]: libpod-conmon-b7ae6906392f2662c6ecc414d82293fbce18811d2144b4ffe099bdea0a973045.scope: Deactivated successfully.
Jan 26 09:57:27 compute-0 ceph-mon[74456]: pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:57:28 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 26 09:57:28 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 26 09:57:28 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 26 09:57:28 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 26 09:57:28 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 26 09:57:28 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 26 09:57:28 compute-0 sudo[221135]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:28 compute-0 lvm[221622]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:57:28 compute-0 lvm[221622]: VG ceph_vg0 finished
Jan 26 09:57:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:57:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:57:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:57:28 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:57:28 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 26 09:57:28 compute-0 sudo[221496]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:28 compute-0 sudo[221642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:57:28 compute-0 sudo[221642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:28 compute-0 sudo[221642]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:57:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:57:29 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:57:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:29.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:29 compute-0 sudo[221820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzqxtozwyiqvggbnbajhmaziqyjmhuzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421449.0011585-3273-101212254776717/AnsiballZ_file.py'
Jan 26 09:57:29 compute-0 sudo[221820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 09:57:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:29.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 09:57:29 compute-0 python3.9[221822]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:29 compute-0 sudo[221820]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:30 compute-0 ceph-mon[74456]: pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:57:30 compute-0 sudo[221972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guivrprljcthlvtnimlctfyfhiojawcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421449.8653054-3297-195363370642308/AnsiballZ_find.py'
Jan 26 09:57:30 compute-0 sudo[221972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:30 compute-0 python3.9[221974]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 09:57:30 compute-0 sudo[221972]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:30 compute-0 sshd-session[221976]: Invalid user oracle from 157.245.76.178 port 46420
Jan 26 09:57:30 compute-0 sshd-session[221976]: Connection closed by invalid user oracle 157.245.76.178 port 46420 [preauth]
Jan 26 09:57:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:57:30 compute-0 sudo[222128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bickhkvhytwugbzxlkwlzcrxqjvwwfse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421450.638936-3321-23341061286418/AnsiballZ_command.py'
Jan 26 09:57:30 compute-0 sudo[222128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:31 compute-0 python3.9[222130]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:57:31 compute-0 sudo[222128]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:31.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 09:57:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:31 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:57:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:31.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:31 compute-0 python3.9[222296]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 09:57:32 compute-0 ceph-mon[74456]: pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:57:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:32 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9234000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:57:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:57:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:33 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:33 compute-0 python3.9[222450]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:33 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9214000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:33.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:33.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:57:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:57:33 compute-0 python3.9[222571]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421452.5694916-3378-124416079093825/.source.xml follow=False _original_basename=secret.xml.j2 checksum=8bb860fb1574c7989940fddd89a1bc8580864aba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:34 compute-0 ceph-mon[74456]: pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:57:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:57:34 compute-0 sudo[222722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzcvisdrpohhaiestfkxcfaxxothobja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421454.0682642-3423-213518719996152/AnsiballZ_command.py'
Jan 26 09:57:34 compute-0 sudo[222722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095734 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:57:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:34 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:34 compute-0 python3.9[222725]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 1a70b85d-e3fd-5814-8a6a-37ea00fcae30
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:57:34 compute-0 polkitd[43452]: Registered Authentication Agent for unix-process:222727:340827 (system bus name :1.2910 [pkttyagent --process 222727 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 26 09:57:34 compute-0 polkitd[43452]: Unregistered Authentication Agent for unix-process:222727:340827 (system bus name :1.2910, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 26 09:57:34 compute-0 polkitd[43452]: Registered Authentication Agent for unix-process:222726:340826 (system bus name :1.2911 [pkttyagent --process 222726 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 26 09:57:34 compute-0 polkitd[43452]: Unregistered Authentication Agent for unix-process:222726:340826 (system bus name :1.2911, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 26 09:57:34 compute-0 sudo[222722]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:57:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:35 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9228001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:35 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:35.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:35.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:35 compute-0 python3.9[222887]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:35 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 26 09:57:35 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 26 09:57:36 compute-0 ceph-mon[74456]: pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:57:36 compute-0 sudo[223037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjsquxyhujyixlvmwkymjiorvgrhggwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421455.9776704-3471-115072025248783/AnsiballZ_command.py'
Jan 26 09:57:36 compute-0 sudo[223037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:36 compute-0 sudo[223037]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:36 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:57:36] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:57:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:57:36] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:57:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:57:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:57:37.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:57:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:37 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:37 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:37 compute-0 sudo[223192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlktvwhwjmgjenodoolyqteulkkvebqo ; FSID=1a70b85d-e3fd-5814-8a6a-37ea00fcae30 KEY=AQDlNXdpAAAAABAAkYdaCUlKVeiqmlhElLFrLA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421456.9143505-3495-17122880638771/AnsiballZ_command.py'
Jan 26 09:57:37 compute-0 sudo[223192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:37.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:37 compute-0 polkitd[43452]: Registered Authentication Agent for unix-process:223195:341116 (system bus name :1.2914 [pkttyagent --process 223195 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 26 09:57:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 09:57:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:37.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 09:57:37 compute-0 polkitd[43452]: Unregistered Authentication Agent for unix-process:223195:341116 (system bus name :1.2914, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 26 09:57:37 compute-0 sudo[223192]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:57:38 compute-0 sudo[223350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjyxsehdgxsnqozfadepejlslofpmeba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421457.8847818-3519-20387039171248/AnsiballZ_copy.py'
Jan 26 09:57:38 compute-0 sudo[223350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:38 compute-0 ceph-mon[74456]: pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:57:38 compute-0 python3.9[223352]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:38 compute-0 sudo[223350]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:38 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:38 compute-0 sudo[223379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:57:38 compute-0 sudo[223379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:38 compute-0 sudo[223379]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:57:39 compute-0 sudo[223545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpjqptudwlcmlosutthkmdwulupqenaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421458.7756782-3543-250908459423552/AnsiballZ_stat.py'
Jan 26 09:57:39 compute-0 sudo[223545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:39 compute-0 podman[223503]: 2026-01-26 09:57:39.122701536 +0000 UTC m=+0.092088035 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 26 09:57:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:39 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:39 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:39.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:39 compute-0 python3.9[223551]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:39 compute-0 sudo[223545]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:57:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:39.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:57:39 compute-0 sudo[223678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbouhywjpsnfymkykaqshqrcigjlawuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421458.7756782-3543-250908459423552/AnsiballZ_copy.py'
Jan 26 09:57:39 compute-0 sudo[223678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:39 compute-0 python3.9[223680]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421458.7756782-3543-250908459423552/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:39 compute-0 sudo[223678]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:40 compute-0 ceph-mon[74456]: pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:57:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:40 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:40 compute-0 sudo[223832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kozuffvwputidmqrohiaofsdlbvxnhbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421460.4767468-3591-160918057407563/AnsiballZ_file.py'
Jan 26 09:57:40 compute-0 sudo[223832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:57:40 compute-0 python3.9[223834]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:41 compute-0 sudo[223832]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:41 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224001950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:41 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:41.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:57:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:41.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:57:41 compute-0 sudo[223984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lclqsykeejopeiwxwrltjtrwhwxugfzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421461.6712615-3615-263255894769832/AnsiballZ_stat.py'
Jan 26 09:57:41 compute-0 sudo[223984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:42 compute-0 python3.9[223986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:42 compute-0 sudo[223984]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:42 compute-0 ceph-mon[74456]: pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:57:42 compute-0 sudo[224064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spwtdufhjuemwyrhjhklsqjvhwcmnwxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421461.6712615-3615-263255894769832/AnsiballZ_file.py'
Jan 26 09:57:42 compute-0 sudo[224064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:42 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:42 compute-0 python3.9[224066]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:42 compute-0 sudo[224064]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:57:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:57:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:43 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:43 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:43.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:57:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:43.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:57:43 compute-0 sudo[224216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sliyglzcdwlzlqzqsgivnqheedxejnpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421463.200906-3651-11051822385627/AnsiballZ_stat.py'
Jan 26 09:57:43 compute-0 sudo[224216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:43 compute-0 python3.9[224218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:43 compute-0 sudo[224216]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:43 compute-0 sudo[224294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhbvpwoedjbwhtpvrdskprwychuyspfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421463.200906-3651-11051822385627/AnsiballZ_file.py'
Jan 26 09:57:43 compute-0 sudo[224294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:44 compute-0 python3.9[224296]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8eghvw3w recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:44 compute-0 sudo[224294]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:44 compute-0 ceph-mon[74456]: pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:57:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:44 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9214002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:57:44 compute-0 sudo[224448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abckrsiuoivstakxtogldmsvuvnudpnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421464.6131296-3687-247852996452424/AnsiballZ_stat.py'
Jan 26 09:57:44 compute-0 sudo[224448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:45 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:45 compute-0 python3.9[224450]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:45 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:45 compute-0 sudo[224448]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:57:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:45.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:57:45 compute-0 sudo[224526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txpatrmdsomqhdspejvizhiixladaopy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421464.6131296-3687-247852996452424/AnsiballZ_file.py'
Jan 26 09:57:45 compute-0 sudo[224526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:45.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:45 compute-0 python3.9[224528]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:45 compute-0 sudo[224526]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:46 compute-0 ceph-mon[74456]: pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:57:46 compute-0 sudo[224680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqndehijlkoqovkgdejggbawcgvvpegr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421466.1696877-3726-34719125052637/AnsiballZ_command.py'
Jan 26 09:57:46 compute-0 sudo[224680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:46 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:57:46] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:57:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:57:46] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 09:57:46 compute-0 python3.9[224682]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:57:46 compute-0 sudo[224680]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:57:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:57:47.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:57:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:47 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9214002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:47 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224002e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:47.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:47.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:47 compute-0 sudo[224833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbediweylscwdstbqfanhfijmebgphoc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769421467.1103094-3750-255295044485158/AnsiballZ_edpm_nftables_from_files.py'
Jan 26 09:57:47 compute-0 sudo[224833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:47 compute-0 python3[224835]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 09:57:47 compute-0 sudo[224833]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:57:48 compute-0 ceph-mon[74456]: pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:57:48 compute-0 sudo[224987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mphvijbwmpdgzeeiaxjhigxjuxytukfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421468.1537743-3774-232569329072326/AnsiballZ_stat.py'
Jan 26 09:57:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:48 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224002e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:48 compute-0 sudo[224987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:57:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:57:48 compute-0 python3.9[224989]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:57:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:57:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:57:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:57:48 compute-0 sudo[224987]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:57:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:57:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:57:48 compute-0 sudo[225065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xudunrqqafcsowcmsyqidohcrtxmheuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421468.1537743-3774-232569329072326/AnsiballZ_file.py'
Jan 26 09:57:48 compute-0 sudo[225065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:49 compute-0 python3.9[225067]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:49 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:49 compute-0 sudo[225065]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:49 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9214002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:49.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:57:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:49.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:49 compute-0 sudo[225217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvlbfunbkkkfrxvtrdzlbrecsninuple ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421469.5256934-3810-258314955657677/AnsiballZ_stat.py'
Jan 26 09:57:49 compute-0 sudo[225217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:50 compute-0 python3.9[225219]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:50 compute-0 sudo[225217]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:50 compute-0 ceph-mon[74456]: pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:57:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:50 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003490 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:50 compute-0 sudo[225344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eghmvhrlnzkrcjkucmulcvnigtbnsydr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421469.5256934-3810-258314955657677/AnsiballZ_copy.py'
Jan 26 09:57:50 compute-0 sudo[225344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:50 compute-0 python3.9[225346]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421469.5256934-3810-258314955657677/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:50 compute-0 sudo[225344]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:57:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:51 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224002e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:51 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:51.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:51 compute-0 sudo[225496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afqkhhtsawegwifnsswoibennuhiopio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421471.1066308-3855-107246883902067/AnsiballZ_stat.py'
Jan 26 09:57:51 compute-0 sudo[225496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:51.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:52 compute-0 python3.9[225498]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:52 compute-0 sudo[225496]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:52 compute-0 ceph-mon[74456]: pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:57:52 compute-0 sudo[225576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exxawygxbcpjyqxfdhpxkefujqtpgkfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421471.1066308-3855-107246883902067/AnsiballZ_file.py'
Jan 26 09:57:52 compute-0 sudo[225576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:52 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9214003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:52 compute-0 python3.9[225578]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:52 compute-0 sudo[225576]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:57:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:57:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:53 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9214003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:53 compute-0 sudo[225728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afdhmhrjwcpqynvbfnkcxdhlmzgnczsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421472.8845367-3891-241331248809194/AnsiballZ_stat.py'
Jan 26 09:57:53 compute-0 sudo[225728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:53 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224002e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:53.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:53 compute-0 python3.9[225730]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:53 compute-0 sudo[225728]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:53.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:53 compute-0 sudo[225806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oovuoccanafjygkmygpmrfpipwioddqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421472.8845367-3891-241331248809194/AnsiballZ_file.py'
Jan 26 09:57:53 compute-0 sudo[225806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:53 compute-0 python3.9[225808]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:53 compute-0 sudo[225806]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:54 compute-0 ceph-mon[74456]: pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:57:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:54 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:57:54.680 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 09:57:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:57:54.681 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 09:57:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:57:54.681 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 09:57:54 compute-0 sudo[225960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsaiaxspmhcuzdcrfrushhvzvrzirtvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421474.4790754-3927-72533996569945/AnsiballZ_stat.py'
Jan 26 09:57:54 compute-0 sudo[225960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:57:55 compute-0 python3.9[225962]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:57:55 compute-0 sudo[225960]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:55 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9214003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:55 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:55.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:55 compute-0 sudo[226085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clmdqbgdtpiyremmocfeyvbovlirmugz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421474.4790754-3927-72533996569945/AnsiballZ_copy.py'
Jan 26 09:57:55 compute-0 sudo[226085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:55.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:55 compute-0 python3.9[226087]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769421474.4790754-3927-72533996569945/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:55 compute-0 sudo[226085]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:56 compute-0 ceph-mon[74456]: pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:57:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:56 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:57:56] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:57:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:57:56] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:57:56 compute-0 sudo[226239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqfdhjxoelgofbzcskqiqhaainmifpnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421476.3365505-3972-70577345404590/AnsiballZ_file.py'
Jan 26 09:57:56 compute-0 sudo[226239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:56 compute-0 python3.9[226241]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:56 compute-0 sudo[226239]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:57:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:57:57.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:57:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:57:57.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:57:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:57 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:57 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9214003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:57.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:57.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:57 compute-0 sudo[226404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwumjxrtnuyiwhabxpzdvsqdrpikxjun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421477.2399344-3996-278244662850101/AnsiballZ_command.py'
Jan 26 09:57:57 compute-0 sudo[226404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:57 compute-0 podman[226365]: 2026-01-26 09:57:57.561969488 +0000 UTC m=+0.052375966 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 09:57:57 compute-0 python3.9[226412]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:57:57 compute-0 sudo[226404]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:57:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095757 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:57:58 compute-0 ceph-mon[74456]: pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:57:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:58 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:58 compute-0 sudo[226567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyitijryzkfdzfzshtfdovfnfelvnzvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421478.1150436-4020-78612588355702/AnsiballZ_blockinfile.py'
Jan 26 09:57:58 compute-0 sudo[226567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:58 compute-0 sudo[226570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:57:58 compute-0 sudo[226570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:57:58 compute-0 sudo[226570]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:58 compute-0 python3.9[226569]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:57:58 compute-0 sudo[226567]: pam_unix(sudo:session): session closed for user root
Jan 26 09:57:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:57:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:59 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224002e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:57:59 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:57:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:57:59.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:57:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:57:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:57:59.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:57:59 compute-0 sudo[226744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enllxfrfqlqvfpvnaomwswgjkjuuzhkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421479.3543608-4047-228949716049508/AnsiballZ_command.py'
Jan 26 09:57:59 compute-0 sudo[226744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:57:59 compute-0 python3.9[226746]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:57:59 compute-0 sudo[226744]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:00 compute-0 sudo[226899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdebljunlvgkxedrhfmuoptschqeqmxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421480.192608-4071-130766581730239/AnsiballZ_stat.py'
Jan 26 09:58:00 compute-0 sudo[226899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:00 compute-0 ceph-mon[74456]: pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:58:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:00 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9214003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:00 compute-0 python3.9[226901]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:58:00 compute-0 sudo[226899]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:58:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:01 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9214003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:01 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9224002e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:01.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:01 compute-0 sudo[227054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrdcwwgfpbojzzaaoqchuzafhrqokada ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421481.0577607-4095-110891286379252/AnsiballZ_command.py'
Jan 26 09:58:01 compute-0 sudo[227054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 09:58:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:01.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 09:58:01 compute-0 python3.9[227056]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:58:01 compute-0 sudo[227054]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:01 compute-0 ceph-mon[74456]: pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 09:58:02 compute-0 sudo[227209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgwdedooaexhxwivqrzxxajwiyiyvsbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421481.9749699-4119-36353396663035/AnsiballZ_file.py'
Jan 26 09:58:02 compute-0 sudo[227209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:02 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:02 compute-0 python3.9[227211]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:58:02 compute-0 sudo[227209]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:58:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:58:03 compute-0 sudo[227363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iirvxtvsxutlkienforlqdmuomsvdrvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421482.8125606-4143-96913119673609/AnsiballZ_stat.py'
Jan 26 09:58:03 compute-0 sudo[227363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:03 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:03 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9214003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:03.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:03 compute-0 python3.9[227365]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:58:03 compute-0 sudo[227363]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:58:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:03.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:58:03 compute-0 sudo[227486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-retuffezfycdarxgeimklcfivwpezxop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421482.8125606-4143-96913119673609/AnsiballZ_copy.py'
Jan 26 09:58:03 compute-0 sudo[227486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:58:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:58:03 compute-0 python3.9[227488]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421482.8125606-4143-96913119673609/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:58:03 compute-0 sudo[227486]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:04 compute-0 ceph-mon[74456]: pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:58:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:58:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:04 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:04 compute-0 sudo[227640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hddcmdwznpnixfficgwcgymnrviwavpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421484.4771273-4188-69310559876089/AnsiballZ_stat.py'
Jan 26 09:58:04 compute-0 sudo[227640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:58:05 compute-0 python3.9[227642]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:58:05 compute-0 sudo[227640]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:05 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9208000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:05 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:05.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:05 compute-0 sudo[227763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xujioefbwocoiqxzdotbawuhumiwmwhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421484.4771273-4188-69310559876089/AnsiballZ_copy.py'
Jan 26 09:58:05 compute-0 sudo[227763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:05.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:05 compute-0 python3.9[227765]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421484.4771273-4188-69310559876089/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:58:05 compute-0 sudo[227763]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:06 compute-0 ceph-mon[74456]: pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:58:06 compute-0 sudo[227915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siggeldsqguneatutmtgkjbiadiamggc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421485.9825523-4233-47796043642120/AnsiballZ_stat.py'
Jan 26 09:58:06 compute-0 sudo[227915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:06 compute-0 python3.9[227917]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:58:06 compute-0 sudo[227915]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:06 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9214003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:58:06] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:58:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:58:06] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:58:06 compute-0 sudo[228040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihkemdlcclytfzntdytonhathtqcqaku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421485.9825523-4233-47796043642120/AnsiballZ_copy.py'
Jan 26 09:58:06 compute-0 sudo[228040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:58:07 compute-0 python3.9[228042]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421485.9825523-4233-47796043642120/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:58:07 compute-0 sudo[228040]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:58:07.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:58:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:58:07.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:58:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:07 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f92280045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:07 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9208001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:07 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:58:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:07.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:07.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:58:07 compute-0 sudo[228192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oinvqmmusqkpnmnraqlwivbzuyaokpzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421487.5851548-4278-830904140071/AnsiballZ_systemd.py'
Jan 26 09:58:07 compute-0 sudo[228192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:08 compute-0 python3.9[228194]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:58:08 compute-0 systemd[1]: Reloading.
Jan 26 09:58:08 compute-0 systemd-rc-local-generator[228220]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:58:08 compute-0 systemd-sysv-generator[228224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:58:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[218841]: 26/01/2026 09:58:08 : epoch 69773a7e : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9210003db0 fd 48 proxy ignored for local
Jan 26 09:58:08 compute-0 kernel: ganesha.nfsd[222324]: segfault at 50 ip 00007f92b4e7932e sp 00007f921dffa210 error 4 in libntirpc.so.5.8[7f92b4e5e000+2c000] likely on CPU 5 (core 0, socket 5)
Jan 26 09:58:08 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 09:58:08 compute-0 systemd[1]: Started Process Core Dump (PID 228232/UID 0).
Jan 26 09:58:08 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 26 09:58:08 compute-0 sudo[228192]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:58:08 compute-0 ceph-mon[74456]: pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:58:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 09:58:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:09.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 09:58:09 compute-0 sudo[228396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecamkunidyevpwvlerxhujjopwqbzhxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421488.9252357-4302-168939369672451/AnsiballZ_systemd.py'
Jan 26 09:58:09 compute-0 sudo[228396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:09 compute-0 podman[228361]: 2026-01-26 09:58:09.374301862 +0000 UTC m=+0.108922600 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 09:58:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:09.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:09 compute-0 python3.9[228409]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 26 09:58:09 compute-0 systemd[1]: Reloading.
Jan 26 09:58:09 compute-0 systemd-coredump[228235]: Process 218874 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f92b4e7932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 09:58:09 compute-0 systemd-rc-local-generator[228442]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:58:09 compute-0 systemd-sysv-generator[228447]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:58:09 compute-0 podman[228453]: 2026-01-26 09:58:09.885146316 +0000 UTC m=+0.029063988 container died 37c7ff9dac09a5e0a9ab0a34a0788c19e3f5294b2735d5f22fe7c31b60a37cb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:58:09 compute-0 systemd[1]: systemd-coredump@9-228232-0.service: Deactivated successfully.
Jan 26 09:58:09 compute-0 systemd[1]: systemd-coredump@9-228232-0.service: Consumed 1.081s CPU time.
Jan 26 09:58:10 compute-0 systemd[1]: Reloading.
Jan 26 09:58:10 compute-0 systemd-rc-local-generator[228494]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:58:10 compute-0 systemd-sysv-generator[228498]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ac777dec9495ab96cfe54510ac1080d6a5df76d21dd5c638be20c8620f439a5-merged.mount: Deactivated successfully.
Jan 26 09:58:10 compute-0 ceph-mon[74456]: pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:58:10 compute-0 sudo[228396]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:10 compute-0 sshd-session[167222]: Connection closed by 192.168.122.30 port 44368
Jan 26 09:58:10 compute-0 sshd-session[167219]: pam_unix(sshd:session): session closed for user zuul
Jan 26 09:58:10 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Jan 26 09:58:10 compute-0 systemd[1]: session-53.scope: Consumed 3min 40.581s CPU time.
Jan 26 09:58:10 compute-0 systemd-logind[787]: Session 53 logged out. Waiting for processes to exit.
Jan 26 09:58:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:58:10 compute-0 systemd-logind[787]: Removed session 53.
Jan 26 09:58:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 26 09:58:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:11.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 26 09:58:11 compute-0 podman[228453]: 2026-01-26 09:58:11.290700415 +0000 UTC m=+1.434618107 container remove 37c7ff9dac09a5e0a9ab0a34a0788c19e3f5294b2735d5f22fe7c31b60a37cb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:58:11 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 09:58:11 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 09:58:11 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.431s CPU time.
Jan 26 09:58:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:11.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:11 compute-0 ceph-mon[74456]: pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:58:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:58:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:58:12 compute-0 sshd-session[228566]: Invalid user oracle from 157.245.76.178 port 39870
Jan 26 09:58:13 compute-0 sshd-session[228566]: Connection closed by invalid user oracle 157.245.76.178 port 39870 [preauth]
Jan 26 09:58:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:13.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:13.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:14 compute-0 ceph-mon[74456]: pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 09:58:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095814 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:58:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:58:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:15.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:15.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:16 compute-0 ceph-mon[74456]: pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 09:58:16 compute-0 sshd-session[228572]: Accepted publickey for zuul from 192.168.122.30 port 33150 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 09:58:16 compute-0 systemd-logind[787]: New session 54 of user zuul.
Jan 26 09:58:16 compute-0 systemd[1]: Started Session 54 of User zuul.
Jan 26 09:58:16 compute-0 sshd-session[228572]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 09:58:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:58:16] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:58:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:58:16] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:58:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Jan 26 09:58:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:58:17.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:58:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:17.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:17.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:17 compute-0 python3.9[228725]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:58:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:58:18 compute-0 ceph-mon[74456]: pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:58:18
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'backups', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.nfs', 'images', 'cephfs.cephfs.meta']
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:58:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:58:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:58:18 compute-0 sudo[228831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:58:18 compute-0 sudo[228831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:58:18 compute-0 sudo[228831]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:58:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Jan 26 09:58:19 compute-0 python3.9[228906]: ansible-ansible.builtin.service_facts Invoked
Jan 26 09:58:19 compute-0 network[228923]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 09:58:19 compute-0 network[228924]: 'network-scripts' will be removed from distribution in near future.
Jan 26 09:58:19 compute-0 network[228925]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 09:58:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:58:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:19.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:19.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095820 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:58:20 compute-0 ceph-mon[74456]: pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Jan 26 09:58:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 852 B/s wr, 2 op/s
Jan 26 09:58:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 09:58:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:21.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 09:58:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:21.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:21 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 10.
Jan 26 09:58:21 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:58:21 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.431s CPU time.
Jan 26 09:58:21 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:58:21 compute-0 podman[229061]: 2026-01-26 09:58:21.787830096 +0000 UTC m=+0.022842890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:58:22 compute-0 podman[229061]: 2026-01-26 09:58:22.058647336 +0000 UTC m=+0.293660110 container create 2c85436f6539f346b1fec68746c76935048c020698a8d26f072ed09526303db5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f657b7fc7c63d31bbbf0b0479dc16d94aedcd853a9647d9c2556063905fa89cb/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f657b7fc7c63d31bbbf0b0479dc16d94aedcd853a9647d9c2556063905fa89cb/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f657b7fc7c63d31bbbf0b0479dc16d94aedcd853a9647d9c2556063905fa89cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f657b7fc7c63d31bbbf0b0479dc16d94aedcd853a9647d9c2556063905fa89cb/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:22 compute-0 podman[229061]: 2026-01-26 09:58:22.144739635 +0000 UTC m=+0.379752489 container init 2c85436f6539f346b1fec68746c76935048c020698a8d26f072ed09526303db5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:58:22 compute-0 podman[229061]: 2026-01-26 09:58:22.149441706 +0000 UTC m=+0.384454510 container start 2c85436f6539f346b1fec68746c76935048c020698a8d26f072ed09526303db5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 09:58:22 compute-0 bash[229061]: 2c85436f6539f346b1fec68746c76935048c020698a8d26f072ed09526303db5
Jan 26 09:58:22 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:58:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:22 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 09:58:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:22 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 09:58:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:22 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 09:58:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:22 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 09:58:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:22 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 09:58:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:22 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 09:58:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:22 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 09:58:22 compute-0 ceph-mon[74456]: pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 852 B/s wr, 2 op/s
Jan 26 09:58:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:22 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:58:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:58:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Jan 26 09:58:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:23.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:23.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:24 compute-0 ceph-mon[74456]: pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Jan 26 09:58:24 compute-0 sudo[229306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gluwcindsxwfxxavzpjpujlzdspngtqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421504.598926-96-208541520935263/AnsiballZ_setup.py'
Jan 26 09:58:24 compute-0 sudo[229306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:58:25 compute-0 python3.9[229308]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 09:58:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:25.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:25 compute-0 sudo[229306]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 09:58:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:25.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 09:58:26 compute-0 sudo[229390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szchjeadtuzgdzzlcmzlmcucweseotks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421504.598926-96-208541520935263/AnsiballZ_dnf.py'
Jan 26 09:58:26 compute-0 sudo[229390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:26 compute-0 ceph-mon[74456]: pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:58:26 compute-0 python3.9[229392]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:58:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:58:26] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Jan 26 09:58:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:58:26] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Jan 26 09:58:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:58:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:58:27.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:58:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:58:27.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:58:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:58:27.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:58:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 09:58:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 4130 writes, 18K keys, 4130 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
                                           Cumulative WAL: 4130 writes, 4130 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1464 writes, 5936 keys, 1464 commit groups, 1.0 writes per commit group, ingest: 10.87 MB, 0.02 MB/s
                                           Interval WAL: 1464 writes, 1464 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     62.9      0.46              0.07         8    0.057       0      0       0.0       0.0
                                             L6      1/0   11.06 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.1    197.2    164.6      0.54              0.19         7    0.077     32K   3837       0.0       0.0
                                            Sum      1/0   11.06 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.1    106.8    117.9      0.99              0.26        15    0.066     32K   3837       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3    160.9    149.6      0.32              0.11         6    0.053     16K   2046       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    197.2    164.6      0.54              0.19         7    0.077     32K   3837       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    145.1      0.20              0.07         7    0.028       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.26              0.00         1    0.259       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.028, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.11 GB write, 0.10 MB/s write, 0.10 GB read, 0.09 MB/s read, 1.0 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a9cd69b350#2 capacity: 304.00 MB usage: 5.41 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000108 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(324,5.12 MB,1.6833%) FilterBlock(16,98.67 KB,0.0316971%) IndexBlock(16,198.50 KB,0.0637657%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 09:58:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:27.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 09:58:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:27.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 09:58:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:58:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095828 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:58:28 compute-0 podman[229396]: 2026-01-26 09:58:28.128954274 +0000 UTC m=+0.062207467 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 26 09:58:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:28 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:58:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:28 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:58:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:28 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 09:58:28 compute-0 ceph-mon[74456]: pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:58:28 compute-0 sudo[229418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:58:28 compute-0 sudo[229418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:58:28 compute-0 sudo[229418]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:28 compute-0 sudo[229443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:58:28 compute-0 sudo[229443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:58:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:58:29 compute-0 sudo[229443]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:29.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:58:29 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:58:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:58:29 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:58:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:58:29 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:58:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:58:29 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:58:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:58:29 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:58:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:58:29 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:58:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:58:29 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
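[editor's note] Each handle_command/audit pair above is a mon_command the mgr dispatched over its monitor session ("config generate-minimal-conf", "auth get", config-key writes, "osd tree"). The same calls can be issued from the rados Python binding; a minimal sketch, assuming /etc/ceph/ceph.conf and a readable client.admin keyring:

    import json
    import rados  # python3-rados binding

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumes the admin keyring is findable
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        print(outbuf.decode())  # the minimal ceph.conf the mgr requested above
    finally:
        cluster.shutdown()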
Jan 26 09:58:29 compute-0 sudo[229500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:58:29 compute-0 sudo[229500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:58:29 compute-0 sudo[229500]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:29.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:29 compute-0 sudo[229525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:58:29 compute-0 sudo[229525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:58:29 compute-0 podman[229590]: 2026-01-26 09:58:29.91856599 +0000 UTC m=+0.040526136 container create dfc69357152af8f4555c51621f28ea3e4f9b7e44487ce836f0f4fadf314c0095 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 09:58:29 compute-0 systemd[1]: Started libpod-conmon-dfc69357152af8f4555c51621f28ea3e4f9b7e44487ce836f0f4fadf314c0095.scope.
Jan 26 09:58:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:58:29 compute-0 podman[229590]: 2026-01-26 09:58:29.900340818 +0000 UTC m=+0.022300974 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:58:30 compute-0 podman[229590]: 2026-01-26 09:58:30.000521757 +0000 UTC m=+0.122481923 container init dfc69357152af8f4555c51621f28ea3e4f9b7e44487ce836f0f4fadf314c0095 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 26 09:58:30 compute-0 podman[229590]: 2026-01-26 09:58:30.006673724 +0000 UTC m=+0.128633860 container start dfc69357152af8f4555c51621f28ea3e4f9b7e44487ce836f0f4fadf314c0095 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 26 09:58:30 compute-0 podman[229590]: 2026-01-26 09:58:30.010126344 +0000 UTC m=+0.132086590 container attach dfc69357152af8f4555c51621f28ea3e4f9b7e44487ce836f0f4fadf314c0095 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 09:58:30 compute-0 infallible_agnesi[229606]: 167 167
Jan 26 09:58:30 compute-0 systemd[1]: libpod-dfc69357152af8f4555c51621f28ea3e4f9b7e44487ce836f0f4fadf314c0095.scope: Deactivated successfully.
Jan 26 09:58:30 compute-0 podman[229590]: 2026-01-26 09:58:30.011357694 +0000 UTC m=+0.133317840 container died dfc69357152af8f4555c51621f28ea3e4f9b7e44487ce836f0f4fadf314c0095 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 26 09:58:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f1394bbd408bf94f71cf1a06d9b5941c0f5b703c928228189bf86eb21b30562-merged.mount: Deactivated successfully.
Jan 26 09:58:30 compute-0 podman[229590]: 2026-01-26 09:58:30.216830675 +0000 UTC m=+0.338790821 container remove dfc69357152af8f4555c51621f28ea3e4f9b7e44487ce836f0f4fadf314c0095 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 26 09:58:30 compute-0 systemd[1]: libpod-conmon-dfc69357152af8f4555c51621f28ea3e4f9b7e44487ce836f0f4fadf314c0095.scope: Deactivated successfully.
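[editor's note] The create/init/start/attach/died/remove sequence bracketed by the libpod-conmon scope is cephadm running a one-shot helper container; its only output, "167 167", is the ceph uid/gid pair cephadm probes before launching daemons. An equivalent one-shot run follows; treating the entrypoint as stat -c '%u %g' /var/lib/ceph is an assumption inferred from that output, not taken from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # --rm reproduces the immediate 'container remove' event journald logged.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout
    print(out.strip())  # expected: "167 167"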
Jan 26 09:58:30 compute-0 podman[229632]: 2026-01-26 09:58:30.379291304 +0000 UTC m=+0.050562846 container create 7340afa9082f26b5a1b265e2961812618d498349f5ac742b0b3cf1b107cab6a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_antonelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 09:58:30 compute-0 ceph-mon[74456]: pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 09:58:30 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:58:30 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:58:30 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:58:30 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:58:30 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:58:30 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:58:30 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:58:30 compute-0 systemd[1]: Started libpod-conmon-7340afa9082f26b5a1b265e2961812618d498349f5ac742b0b3cf1b107cab6a6.scope.
Jan 26 09:58:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f081e20c42a594213d0591a5a1e98d6a703b6e4fcd59976254db929398cd8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f081e20c42a594213d0591a5a1e98d6a703b6e4fcd59976254db929398cd8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f081e20c42a594213d0591a5a1e98d6a703b6e4fcd59976254db929398cd8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:30 compute-0 podman[229632]: 2026-01-26 09:58:30.354036787 +0000 UTC m=+0.025308389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f081e20c42a594213d0591a5a1e98d6a703b6e4fcd59976254db929398cd8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f081e20c42a594213d0591a5a1e98d6a703b6e4fcd59976254db929398cd8a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:30 compute-0 podman[229632]: 2026-01-26 09:58:30.458508954 +0000 UTC m=+0.129780516 container init 7340afa9082f26b5a1b265e2961812618d498349f5ac742b0b3cf1b107cab6a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_antonelli, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 09:58:30 compute-0 podman[229632]: 2026-01-26 09:58:30.472835792 +0000 UTC m=+0.144107344 container start 7340afa9082f26b5a1b265e2961812618d498349f5ac742b0b3cf1b107cab6a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:58:30 compute-0 podman[229632]: 2026-01-26 09:58:30.476987354 +0000 UTC m=+0.148258896 container attach 7340afa9082f26b5a1b265e2961812618d498349f5ac742b0b3cf1b107cab6a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 09:58:30 compute-0 cool_antonelli[229650]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:58:30 compute-0 cool_antonelli[229650]: --> All data devices are unavailable
Jan 26 09:58:30 compute-0 systemd[1]: libpod-7340afa9082f26b5a1b265e2961812618d498349f5ac742b0b3cf1b107cab6a6.scope: Deactivated successfully.
Jan 26 09:58:30 compute-0 podman[229632]: 2026-01-26 09:58:30.786921043 +0000 UTC m=+0.458192595 container died 7340afa9082f26b5a1b265e2961812618d498349f5ac742b0b3cf1b107cab6a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_antonelli, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:58:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-34f081e20c42a594213d0591a5a1e98d6a703b6e4fcd59976254db929398cd8a-merged.mount: Deactivated successfully.
Jan 26 09:58:30 compute-0 podman[229632]: 2026-01-26 09:58:30.832637303 +0000 UTC m=+0.503908845 container remove 7340afa9082f26b5a1b265e2961812618d498349f5ac742b0b3cf1b107cab6a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_antonelli, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:58:30 compute-0 systemd[1]: libpod-conmon-7340afa9082f26b5a1b265e2961812618d498349f5ac742b0b3cf1b107cab6a6.scope: Deactivated successfully.
Jan 26 09:58:30 compute-0 sudo[229525]: pam_unix(sudo:session): session closed for user root
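[editor's note] The cool_antonelli run was "ceph-volume lvm batch" against /dev/ceph_vg0/ceph_lv0; it exited with "All data devices are unavailable" because the LV already carries ceph.* LVM tags binding it to an existing OSD, as the lvm list output below confirms. A sketch of inspecting those tags directly; parsing them into a dict is illustrative:

    import subprocess

    # ceph-volume skips any LV whose tags already bind it to an OSD.
    tags = subprocess.run(
        ["lvs", "--noheadings", "-o", "lv_tags", "/dev/ceph_vg0/ceph_lv0"],
        check=True, capture_output=True, text=True).stdout.strip()
    print(dict(t.split("=", 1) for t in tags.split(",") if "=" in t))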
Jan 26 09:58:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:58:30 compute-0 sudo[229676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:58:30 compute-0 sudo[229676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:58:30 compute-0 sudo[229676]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:30 compute-0 sudo[229701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:58:30 compute-0 sudo[229701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:58:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:31.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:31 compute-0 podman[229768]: 2026-01-26 09:58:31.383962881 +0000 UTC m=+0.046162485 container create 8a2533ebc60f2a8c73cba079ede1220f8bf8b467fd00bc768a1aeaed28d45323 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:58:31 compute-0 systemd[1]: Started libpod-conmon-8a2533ebc60f2a8c73cba079ede1220f8bf8b467fd00bc768a1aeaed28d45323.scope.
Jan 26 09:58:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:58:31 compute-0 podman[229768]: 2026-01-26 09:58:31.443511943 +0000 UTC m=+0.105711557 container init 8a2533ebc60f2a8c73cba079ede1220f8bf8b467fd00bc768a1aeaed28d45323 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 26 09:58:31 compute-0 podman[229768]: 2026-01-26 09:58:31.450043931 +0000 UTC m=+0.112243535 container start 8a2533ebc60f2a8c73cba079ede1220f8bf8b467fd00bc768a1aeaed28d45323 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 26 09:58:31 compute-0 podman[229768]: 2026-01-26 09:58:31.453471561 +0000 UTC m=+0.115671155 container attach 8a2533ebc60f2a8c73cba079ede1220f8bf8b467fd00bc768a1aeaed28d45323 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 26 09:58:31 compute-0 amazing_greider[229784]: 167 167
Jan 26 09:58:31 compute-0 systemd[1]: libpod-8a2533ebc60f2a8c73cba079ede1220f8bf8b467fd00bc768a1aeaed28d45323.scope: Deactivated successfully.
Jan 26 09:58:31 compute-0 podman[229768]: 2026-01-26 09:58:31.455839776 +0000 UTC m=+0.118039380 container died 8a2533ebc60f2a8c73cba079ede1220f8bf8b467fd00bc768a1aeaed28d45323 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_greider, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:58:31 compute-0 podman[229768]: 2026-01-26 09:58:31.361189834 +0000 UTC m=+0.023389458 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:58:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b8ba9028cf78d40b7a9fd7850cb8e86ce9cf2f495f1a40b3deeaea51fd65c83-merged.mount: Deactivated successfully.
Jan 26 09:58:31 compute-0 podman[229768]: 2026-01-26 09:58:31.486570468 +0000 UTC m=+0.148770072 container remove 8a2533ebc60f2a8c73cba079ede1220f8bf8b467fd00bc768a1aeaed28d45323 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_greider, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 09:58:31 compute-0 systemd[1]: libpod-conmon-8a2533ebc60f2a8c73cba079ede1220f8bf8b467fd00bc768a1aeaed28d45323.scope: Deactivated successfully.
Jan 26 09:58:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:31.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:31 compute-0 podman[229808]: 2026-01-26 09:58:31.635260976 +0000 UTC m=+0.042864649 container create ef9b0aa9d53584118a780ad2105d3335afa95c76a4a9d4ad21364834b662461e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_meninsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Jan 26 09:58:31 compute-0 systemd[1]: Started libpod-conmon-ef9b0aa9d53584118a780ad2105d3335afa95c76a4a9d4ad21364834b662461e.scope.
Jan 26 09:58:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3785cdc796865a5be534e19ee2592ff70403e0ff60f7fff59fc1d1c20bf95ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3785cdc796865a5be534e19ee2592ff70403e0ff60f7fff59fc1d1c20bf95ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3785cdc796865a5be534e19ee2592ff70403e0ff60f7fff59fc1d1c20bf95ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3785cdc796865a5be534e19ee2592ff70403e0ff60f7fff59fc1d1c20bf95ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:31 compute-0 podman[229808]: 2026-01-26 09:58:31.61594917 +0000 UTC m=+0.023552853 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:58:31 compute-0 podman[229808]: 2026-01-26 09:58:31.724856128 +0000 UTC m=+0.132459801 container init ef9b0aa9d53584118a780ad2105d3335afa95c76a4a9d4ad21364834b662461e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 26 09:58:31 compute-0 podman[229808]: 2026-01-26 09:58:31.731732167 +0000 UTC m=+0.139335830 container start ef9b0aa9d53584118a780ad2105d3335afa95c76a4a9d4ad21364834b662461e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_meninsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:58:31 compute-0 podman[229808]: 2026-01-26 09:58:31.734769925 +0000 UTC m=+0.142373638 container attach ef9b0aa9d53584118a780ad2105d3335afa95c76a4a9d4ad21364834b662461e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_meninsky, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:58:31 compute-0 sudo[229390]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]: {
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:     "0": [
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:         {
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:             "devices": [
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "/dev/loop3"
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:             ],
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:             "lv_name": "ceph_lv0",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:             "lv_size": "21470642176",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:             "name": "ceph_lv0",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:             "tags": {
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "ceph.cluster_name": "ceph",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "ceph.crush_device_class": "",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "ceph.encrypted": "0",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "ceph.osd_id": "0",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "ceph.type": "block",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "ceph.vdo": "0",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:                 "ceph.with_tpm": "0"
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:             },
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:             "type": "block",
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:             "vg_name": "ceph_vg0"
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:         }
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]:     ]
Jan 26 09:58:32 compute-0 stupefied_meninsky[229824]: }
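[editor's note] cephadm consumes this "ceph-volume lvm list --format json" payload to reconcile which OSDs live on which devices. A minimal sketch of extracting the osd-to-device mapping from it; reading the JSON from stdin is a convenience assumption:

    import json
    import sys

    # Pipe in the JSON block printed above by the stupefied_meninsky container.
    listing = json.load(sys.stdin)
    for osd_id, lvs in listing.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid=ac85653c-...)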
Jan 26 09:58:32 compute-0 systemd[1]: libpod-ef9b0aa9d53584118a780ad2105d3335afa95c76a4a9d4ad21364834b662461e.scope: Deactivated successfully.
Jan 26 09:58:32 compute-0 podman[229808]: 2026-01-26 09:58:32.049933701 +0000 UTC m=+0.457537384 container died ef9b0aa9d53584118a780ad2105d3335afa95c76a4a9d4ad21364834b662461e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 09:58:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3785cdc796865a5be534e19ee2592ff70403e0ff60f7fff59fc1d1c20bf95ef-merged.mount: Deactivated successfully.
Jan 26 09:58:32 compute-0 podman[229808]: 2026-01-26 09:58:32.092364905 +0000 UTC m=+0.499968568 container remove ef9b0aa9d53584118a780ad2105d3335afa95c76a4a9d4ad21364834b662461e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:58:32 compute-0 systemd[1]: libpod-conmon-ef9b0aa9d53584118a780ad2105d3335afa95c76a4a9d4ad21364834b662461e.scope: Deactivated successfully.
Jan 26 09:58:32 compute-0 sudo[229701]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:32 compute-0 sudo[229920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:58:32 compute-0 sudo[229920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:58:32 compute-0 sudo[229920]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:32 compute-0 sudo[229945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:58:32 compute-0 sudo[229945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:58:32 compute-0 ceph-mon[74456]: pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:58:32 compute-0 sudo[230078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgtvsfzqogjaukiwdqhskljlfrvopdbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421512.0837512-132-70078007185331/AnsiballZ_stat.py'
Jan 26 09:58:32 compute-0 sudo[230078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:32 compute-0 podman[230088]: 2026-01-26 09:58:32.648183397 +0000 UTC m=+0.048693907 container create 49460f791e79b073462a4d518fe7c7af6dabc8222960cf57bb001f406dceb4df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 09:58:32 compute-0 systemd[1]: Started libpod-conmon-49460f791e79b073462a4d518fe7c7af6dabc8222960cf57bb001f406dceb4df.scope.
Jan 26 09:58:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:58:32 compute-0 podman[230088]: 2026-01-26 09:58:32.628593771 +0000 UTC m=+0.029104331 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:58:32 compute-0 podman[230088]: 2026-01-26 09:58:32.724241916 +0000 UTC m=+0.124752436 container init 49460f791e79b073462a4d518fe7c7af6dabc8222960cf57bb001f406dceb4df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_burnell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 09:58:32 compute-0 podman[230088]: 2026-01-26 09:58:32.730538217 +0000 UTC m=+0.131048727 container start 49460f791e79b073462a4d518fe7c7af6dabc8222960cf57bb001f406dceb4df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_burnell, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:58:32 compute-0 podman[230088]: 2026-01-26 09:58:32.734150542 +0000 UTC m=+0.134661052 container attach 49460f791e79b073462a4d518fe7c7af6dabc8222960cf57bb001f406dceb4df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:58:32 compute-0 jovial_burnell[230104]: 167 167
Jan 26 09:58:32 compute-0 systemd[1]: libpod-49460f791e79b073462a4d518fe7c7af6dabc8222960cf57bb001f406dceb4df.scope: Deactivated successfully.
Jan 26 09:58:32 compute-0 conmon[230104]: conmon 49460f791e79b073462a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49460f791e79b073462a4d518fe7c7af6dabc8222960cf57bb001f406dceb4df.scope/container/memory.events
Jan 26 09:58:32 compute-0 podman[230088]: 2026-01-26 09:58:32.736664222 +0000 UTC m=+0.137174732 container died 49460f791e79b073462a4d518fe7c7af6dabc8222960cf57bb001f406dceb4df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_burnell, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:58:32 compute-0 python3.9[230083]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:58:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d39980df0a6833349910a14ce903b13d1a37adc0c6ae5da1bbd423e1519e701-merged.mount: Deactivated successfully.
Jan 26 09:58:32 compute-0 sudo[230078]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:32 compute-0 podman[230088]: 2026-01-26 09:58:32.772224229 +0000 UTC m=+0.172734739 container remove 49460f791e79b073462a4d518fe7c7af6dabc8222960cf57bb001f406dceb4df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:58:32 compute-0 systemd[1]: libpod-conmon-49460f791e79b073462a4d518fe7c7af6dabc8222960cf57bb001f406dceb4df.scope: Deactivated successfully.
Jan 26 09:58:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
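[editor's note] _set_new_cache_sizes is the monitor's periodic cache autotuner splitting its memory budget (here roughly 1 GiB) between the inc, full, and kv caches; when autotuning is enabled the budget derives from mon_memory_target. A sketch of reading the effective value, assuming admin credentials on this host:

    import subprocess

    # 'ceph config get mon mon_memory_target' needs a reachable cluster and
    # an admin keyring (an assumption for this host).
    target = subprocess.run(
        ["ceph", "config", "get", "mon", "mon_memory_target"],
        check=True, capture_output=True, text=True).stdout.strip()
    print(int(target))  # bytes; compare with the cache_size figure logged above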
Jan 26 09:58:32 compute-0 podman[230151]: 2026-01-26 09:58:32.934907704 +0000 UTC m=+0.051122714 container create 1fe9987cfe2139cd9bec26592d85f21ccdf2c3015ff47b2f73b2e6d379b374af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 26 09:58:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 425 B/s wr, 1 op/s
Jan 26 09:58:32 compute-0 systemd[1]: Started libpod-conmon-1fe9987cfe2139cd9bec26592d85f21ccdf2c3015ff47b2f73b2e6d379b374af.scope.
Jan 26 09:58:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:32 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:58:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:33 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:58:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:33 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:58:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:33 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 09:58:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:58:33 compute-0 podman[230151]: 2026-01-26 09:58:32.913657766 +0000 UTC m=+0.029872826 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50421c7cd0e8d3cfe4f3efd99d4c16603af5bcb5308213402dfe1d65d99507e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50421c7cd0e8d3cfe4f3efd99d4c16603af5bcb5308213402dfe1d65d99507e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50421c7cd0e8d3cfe4f3efd99d4c16603af5bcb5308213402dfe1d65d99507e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50421c7cd0e8d3cfe4f3efd99d4c16603af5bcb5308213402dfe1d65d99507e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:58:33 compute-0 podman[230151]: 2026-01-26 09:58:33.019363201 +0000 UTC m=+0.135578241 container init 1fe9987cfe2139cd9bec26592d85f21ccdf2c3015ff47b2f73b2e6d379b374af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 09:58:33 compute-0 podman[230151]: 2026-01-26 09:58:33.028869125 +0000 UTC m=+0.145084135 container start 1fe9987cfe2139cd9bec26592d85f21ccdf2c3015ff47b2f73b2e6d379b374af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 09:58:33 compute-0 podman[230151]: 2026-01-26 09:58:33.032606084 +0000 UTC m=+0.148821094 container attach 1fe9987cfe2139cd9bec26592d85f21ccdf2c3015ff47b2f73b2e6d379b374af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 09:58:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 26 09:58:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:33.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 26 09:58:33 compute-0 sudo[230359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrwcqypoxubfhuzflawjxtarmgovnfpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421513.1045656-162-69709046048980/AnsiballZ_command.py'
Jan 26 09:58:33 compute-0 sudo[230359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:33.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:33 compute-0 lvm[230369]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:58:33 compute-0 lvm[230369]: VG ceph_vg0 finished
Jan 26 09:58:33 compute-0 nifty_snyder[230167]: {}
Jan 26 09:58:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:58:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:58:33 compute-0 python3.9[230363]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:58:33 compute-0 systemd[1]: libpod-1fe9987cfe2139cd9bec26592d85f21ccdf2c3015ff47b2f73b2e6d379b374af.scope: Deactivated successfully.
Jan 26 09:58:33 compute-0 systemd[1]: libpod-1fe9987cfe2139cd9bec26592d85f21ccdf2c3015ff47b2f73b2e6d379b374af.scope: Consumed 1.068s CPU time.
Jan 26 09:58:33 compute-0 conmon[230167]: conmon 1fe9987cfe2139cd9bec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1fe9987cfe2139cd9bec26592d85f21ccdf2c3015ff47b2f73b2e6d379b374af.scope/container/memory.events
Jan 26 09:58:33 compute-0 podman[230151]: 2026-01-26 09:58:33.738787988 +0000 UTC m=+0.855003028 container died 1fe9987cfe2139cd9bec26592d85f21ccdf2c3015ff47b2f73b2e6d379b374af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_snyder, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 09:58:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:33 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:58:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:33 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:58:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:33 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:58:33 compute-0 sudo[230359]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-50421c7cd0e8d3cfe4f3efd99d4c16603af5bcb5308213402dfe1d65d99507e6-merged.mount: Deactivated successfully.
Jan 26 09:58:33 compute-0 podman[230151]: 2026-01-26 09:58:33.83529543 +0000 UTC m=+0.951510430 container remove 1fe9987cfe2139cd9bec26592d85f21ccdf2c3015ff47b2f73b2e6d379b374af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:58:33 compute-0 systemd[1]: libpod-conmon-1fe9987cfe2139cd9bec26592d85f21ccdf2c3015ff47b2f73b2e6d379b374af.scope: Deactivated successfully.
Jan 26 09:58:33 compute-0 sudo[229945]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:58:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:58:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:58:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:58:33 compute-0 sudo[230412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:58:33 compute-0 sudo[230412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:58:33 compute-0 sudo[230412]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:34 compute-0 sudo[230564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvnxgzaobukluayxlshwcntykutavvrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421514.1984806-192-173287628580573/AnsiballZ_stat.py'
Jan 26 09:58:34 compute-0 sudo[230564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:34 compute-0 ceph-mon[74456]: pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 425 B/s wr, 1 op/s
Jan 26 09:58:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:58:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:58:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:58:34 compute-0 python3.9[230566]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:58:34 compute-0 sudo[230564]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 765 B/s wr, 3 op/s
Jan 26 09:58:35 compute-0 sudo[230716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcfgoskbvgpucwkezvfgjaezedwghlnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421514.969618-216-133969001107362/AnsiballZ_command.py'
Jan 26 09:58:35 compute-0 sudo[230716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:35.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:35 compute-0 python3.9[230718]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:58:35 compute-0 sudo[230716]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:35.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:36 compute-0 sudo[230869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-easykybnztxmdjvpcjiantiyvhlsidaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421515.7449653-240-91734845532693/AnsiballZ_stat.py'
Jan 26 09:58:36 compute-0 sudo[230869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:36 compute-0 python3.9[230871]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:58:36 compute-0 sudo[230869]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:36 compute-0 ceph-mon[74456]: pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 765 B/s wr, 3 op/s
Jan 26 09:58:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:58:36] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:58:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:58:36] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:58:36 compute-0 sudo[230994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avksrndfnocrbaajsedfxwrqtcavnlls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421515.7449653-240-91734845532693/AnsiballZ_copy.py'
Jan 26 09:58:36 compute-0 sudo[230994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 680 B/s wr, 2 op/s
Jan 26 09:58:36 compute-0 python3.9[230996]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421515.7449653-240-91734845532693/.source.iscsi _original_basename=.0pv2loh5 follow=False checksum=a27bc121f9b5197ff948e17d06344ee549fab28b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:58:37 compute-0 sudo[230994]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:58:37.052Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:58:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 09:58:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:37.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 09:58:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:37.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:37 compute-0 ceph-mon[74456]: pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 680 B/s wr, 2 op/s
Jan 26 09:58:37 compute-0 sudo[231146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flrcdxibsesadpggnnwecvskmdmeomhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421517.2380052-285-272853489865844/AnsiballZ_file.py'
Jan 26 09:58:37 compute-0 sudo[231146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:58:38 compute-0 python3.9[231148]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:58:38 compute-0 sudo[231146]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:38 compute-0 sudo[231300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccdunkqimwojqvwfmpcjxzjfodygkoaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421518.3333833-309-80523477080262/AnsiballZ_lineinfile.py'
Jan 26 09:58:38 compute-0 sudo[231300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:38 compute-0 sudo[231303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:58:38 compute-0 sudo[231303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:58:38 compute-0 sudo[231303]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:38 compute-0 python3.9[231302]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:58:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 680 B/s wr, 2 op/s
Jan 26 09:58:38 compute-0 sudo[231300]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 09:58:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:39.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 09:58:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 09:58:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:39.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:58:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:39 : epoch 69773abe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:58:40 compute-0 ceph-mon[74456]: pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 680 B/s wr, 2 op/s
Jan 26 09:58:40 compute-0 podman[231439]: 2026-01-26 09:58:40.20810292 +0000 UTC m=+0.136353905 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 09:58:40 compute-0 sudo[231516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilyxfvyichdbfkvxuooyiujnuqgimuwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421519.4429994-336-97544331871422/AnsiballZ_systemd_service.py'
Jan 26 09:58:40 compute-0 sudo[231516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:40 compute-0 python3.9[231518]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:58:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:40 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f166c000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:40 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 26 09:58:40 compute-0 sudo[231516]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 936 B/s wr, 3 op/s
Jan 26 09:58:41 compute-0 sudo[231677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiqussmxjlfirkynhxhbsdeonqdwuezx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421520.82681-360-71238975415573/AnsiballZ_systemd_service.py'
Jan 26 09:58:41 compute-0 sudo[231677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:41 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c0016e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:41 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1654000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:41.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:41.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:41 compute-0 python3.9[231679]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:58:41 compute-0 systemd[1]: Reloading.
Jan 26 09:58:41 compute-0 systemd-rc-local-generator[231711]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:58:41 compute-0 systemd-sysv-generator[231716]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:58:42 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 26 09:58:42 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 26 09:58:42 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 26 09:58:42 compute-0 systemd[1]: Started Open-iSCSI.
Jan 26 09:58:42 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 26 09:58:42 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 26 09:58:42 compute-0 sudo[231677]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095842 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:58:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:42 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1648000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:42 compute-0 ceph-mon[74456]: pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 936 B/s wr, 3 op/s
Jan 26 09:58:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:58:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:42 : epoch 69773abe : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:58:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:42 : epoch 69773abe : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:58:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 595 B/s wr, 2 op/s
Jan 26 09:58:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:43 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1650000fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:43 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1650000fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:43.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:43 compute-0 python3.9[231879]: ansible-ansible.builtin.service_facts Invoked
Jan 26 09:58:43 compute-0 network[231896]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 09:58:43 compute-0 network[231897]: 'network-scripts' will be removed from distribution in near future.
Jan 26 09:58:43 compute-0 network[231898]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 09:58:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 09:58:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:43.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 09:58:43 compute-0 ceph-mon[74456]: pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 595 B/s wr, 2 op/s
Jan 26 09:58:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:44 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16540016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 26 09:58:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:45 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16480016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:45 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1650000fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:45.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:45.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:45 : epoch 69773abe : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:58:46 compute-0 ceph-mon[74456]: pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 26 09:58:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:46 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1650000fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:58:46] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:58:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:58:46] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:58:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:58:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:58:47.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:58:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:47 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16540016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:47 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16480016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 26 09:58:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:47.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 26 09:58:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:47.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:58:48 compute-0 ceph-mon[74456]: pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:58:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:48 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:58:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:58:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:58:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:58:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:58:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:58:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:58:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:58:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:58:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:49 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1650000fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:49 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16540016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 09:58:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:49.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 09:58:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:58:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:49.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095850 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:58:50 compute-0 ceph-mon[74456]: pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:58:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:50 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16540016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:50 compute-0 sudo[232176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvprrwktgkuwforpyleaypewyjycevsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421530.4823546-429-23878918994261/AnsiballZ_dnf.py'
Jan 26 09:58:50 compute-0 sudo[232176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:58:51 compute-0 python3.9[232178]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:58:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:51 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:51 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1650002f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 26 09:58:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:51.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 26 09:58:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:51.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:52 compute-0 ceph-mon[74456]: pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:58:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:52 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16540016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:58:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:52.962104) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421532962134, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1448, "num_deletes": 255, "total_data_size": 2742349, "memory_usage": 2793360, "flush_reason": "Manual Compaction"}
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421532976495, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 2675326, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17670, "largest_seqno": 19117, "table_properties": {"data_size": 2668688, "index_size": 3773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13136, "raw_average_key_size": 18, "raw_value_size": 2655560, "raw_average_value_size": 3804, "num_data_blocks": 169, "num_entries": 698, "num_filter_entries": 698, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769421388, "oldest_key_time": 1769421388, "file_creation_time": 1769421532, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 14429 microseconds, and 5900 cpu microseconds.
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:52.976529) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 2675326 bytes OK
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:52.976547) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:52.977982) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:52.977993) EVENT_LOG_v1 {"time_micros": 1769421532977989, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:52.978007) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2736218, prev total WAL file size 2736218, number of live WAL files 2.
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:52.978634) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(2612KB)], [38(11MB)]
Jan 26 09:58:52 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421532978663, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 14277032, "oldest_snapshot_seqno": -1}
Jan 26 09:58:53 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 09:58:53 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 09:58:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:53 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1648002720 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:53 compute-0 systemd[1]: Reloading.
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4999 keys, 13831543 bytes, temperature: kUnknown
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421533292780, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13831543, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13796163, "index_size": 21766, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126745, "raw_average_key_size": 25, "raw_value_size": 13703730, "raw_average_value_size": 2741, "num_data_blocks": 897, "num_entries": 4999, "num_filter_entries": 4999, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769421532, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 09:58:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:53 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:53.293841) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13831543 bytes
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:53.331305) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.3 rd, 43.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 11.1 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(10.5) write-amplify(5.2) OK, records in: 5523, records dropped: 524 output_compression: NoCompression
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:53.331345) EVENT_LOG_v1 {"time_micros": 1769421533331330, "job": 18, "event": "compaction_finished", "compaction_time_micros": 315010, "compaction_time_cpu_micros": 27046, "output_level": 6, "num_output_files": 1, "total_output_size": 13831543, "num_input_records": 5523, "num_output_records": 4999, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421533332051, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 26 09:58:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:53.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421533334846, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:52.978547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:53.334939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:53.334945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:53.334946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:53.334948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:58:53 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-09:58:53.334949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 09:58:53 compute-0 systemd-rc-local-generator[232227]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:58:53 compute-0 systemd-sysv-generator[232230]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:58:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:53.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:53 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 09:58:54 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 09:58:54 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 09:58:54 compute-0 systemd[1]: run-rd353b9eb77a9474db58bf3e4201eb7e2.service: Deactivated successfully.
Jan 26 09:58:54 compute-0 sudo[232176]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:54 compute-0 ceph-mon[74456]: pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 26 09:58:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:54 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1650002f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:58:54.681 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 09:58:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:58:54.682 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 09:58:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:58:54.682 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 09:58:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 26 09:58:54 compute-0 sudo[232496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcvrlgavlvyfgwazavpeanfrywdounvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421534.7097611-456-237854067227574/AnsiballZ_file.py'
Jan 26 09:58:54 compute-0 sudo[232496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:55 compute-0 python3.9[232498]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 26 09:58:55 compute-0 sudo[232496]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:55 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16540032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:55 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1648002720 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:55.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 09:58:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:55.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 09:58:55 compute-0 ceph-mon[74456]: pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 26 09:58:56 compute-0 sudo[232650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwglwfteoehlupcknkuubeonykardcnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421535.5871837-480-32178927229318/AnsiballZ_modprobe.py'
Jan 26 09:58:56 compute-0 sudo[232650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:56 compute-0 sshd-session[232523]: Invalid user oracle from 157.245.76.178 port 42192
Jan 26 09:58:56 compute-0 python3.9[232652]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 26 09:58:56 compute-0 sshd-session[232523]: Connection closed by invalid user oracle 157.245.76.178 port 42192 [preauth]
Jan 26 09:58:56 compute-0 sudo[232650]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:56 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:58:56] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Jan 26 09:58:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:58:56] "GET /metrics HTTP/1.1" 200 48276 "" "Prometheus/2.51.0"
Jan 26 09:58:56 compute-0 sudo[232808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgiyzbfnqewbehtlcaovnajluaihzckv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421536.5716314-504-220282636876724/AnsiballZ_stat.py'
Jan 26 09:58:56 compute-0 sudo[232808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:58:57 compute-0 python3.9[232810]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:58:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:58:57.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 09:58:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:58:57.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:58:57 compute-0 sudo[232808]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:58:57.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:58:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:57 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16540032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:57 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1650003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.003000094s ======
Jan 26 09:58:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:57.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000094s
Jan 26 09:58:57 compute-0 sudo[232931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jccnggggjrjphjhrrqyuzyqhkcockhqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421536.5716314-504-220282636876724/AnsiballZ_copy.py'
Jan 26 09:58:57 compute-0 sudo[232931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:57.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:57 compute-0 python3.9[232933]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421536.5716314-504-220282636876724/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:58:57 compute-0 sudo[232931]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:58:58 compute-0 ceph-mon[74456]: pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:58:58 compute-0 sudo[233102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akihpxiaivqxwjmilqcrecnneuyhbpsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421538.253353-552-54382956063674/AnsiballZ_lineinfile.py'
Jan 26 09:58:58 compute-0 sudo[233102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:58 compute-0 podman[233059]: 2026-01-26 09:58:58.553531985 +0000 UTC m=+0.066712472 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 09:58:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:58 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1648003430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:58 compute-0 python3.9[233107]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:58:58 compute-0 sudo[233102]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:58:59 compute-0 sudo[233132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:58:59 compute-0 sudo[233132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:58:59 compute-0 sudo[233132]: pam_unix(sudo:session): session closed for user root
Jan 26 09:58:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:59 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:58:59 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1654003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:58:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:58:59.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:58:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:58:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:58:59.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:58:59 compute-0 sudo[233282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsllegoeohmuojrhbhlexhxznjeggdsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421539.0354714-576-197765473535616/AnsiballZ_systemd.py'
Jan 26 09:58:59 compute-0 sudo[233282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:58:59 compute-0 python3.9[233284]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:59:00 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 26 09:59:00 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 26 09:59:00 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 26 09:59:00 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 26 09:59:00 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 26 09:59:00 compute-0 sudo[233282]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:00 compute-0 ceph-mon[74456]: pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:59:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:00 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1650003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:00 compute-0 sudo[233440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxrbudiquptaebzacllbuzoahkkszxhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421540.3801916-600-227129481022261/AnsiballZ_command.py'
Jan 26 09:59:00 compute-0 sudo[233440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:00 compute-0 python3.9[233442]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:59:00 compute-0 sudo[233440]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:59:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:01 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1648003430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:01 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:01.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:01.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:01 compute-0 sudo[233593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdqpakusbnjwmhsjrclivukcffvzhkqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421541.4791572-630-181671220064207/AnsiballZ_stat.py'
Jan 26 09:59:01 compute-0 sudo[233593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:02 compute-0 python3.9[233595]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:59:02 compute-0 sudo[233593]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:02 compute-0 ceph-mon[74456]: pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:59:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:02 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1654003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:02 compute-0 sudo[233747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxffokqcfqykybsyafiiprpjwxtuzmjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421542.6258438-657-114535955215668/AnsiballZ_stat.py'
Jan 26 09:59:02 compute-0 sudo[233747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:59:03 compute-0 python3.9[233749]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:59:03 compute-0 sudo[233747]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:03 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1650003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:03 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1648003430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:03.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:03 compute-0 sudo[233870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqptucfepslhxagmkzklltchdgkaqxyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421542.6258438-657-114535955215668/AnsiballZ_copy.py'
Jan 26 09:59:03 compute-0 sudo[233870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:03.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:59:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:59:03 compute-0 python3.9[233872]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421542.6258438-657-114535955215668/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:03 compute-0 sudo[233870]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:04 compute-0 sudo[234024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trohpoyztkoccphdwdsrjcfhhxetghfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421544.1449068-702-15677832517504/AnsiballZ_command.py'
Jan 26 09:59:04 compute-0 sudo[234024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:04 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:04 compute-0 python3.9[234026]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:59:04 compute-0 sudo[234024]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:04 compute-0 ceph-mon[74456]: pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:59:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:05 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1654003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:05 compute-0 sudo[234177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbesagajbixduzjspycyvxnrzwynnpke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421544.9898522-726-274476592229720/AnsiballZ_lineinfile.py'
Jan 26 09:59:05 compute-0 sudo[234177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:05 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1650003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:05.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:05 compute-0 python3.9[234179]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:05 compute-0 sudo[234177]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:05.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:05 compute-0 ceph-mon[74456]: pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 26 09:59:06 compute-0 sudo[234331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ualznuwclfufifjaleyihyloiojumnkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421545.821284-750-271470074913342/AnsiballZ_replace.py'
Jan 26 09:59:06 compute-0 sudo[234331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:06 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1648003430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:59:06] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:59:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:59:06] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:59:06 compute-0 python3.9[234333]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:06 compute-0 sudo[234331]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:59:07.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:59:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:59:07.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:59:07 compute-0 sudo[234483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghzrxafwlixlbyzoobviwhjtdjakgtwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421546.9109206-774-175957469786032/AnsiballZ_replace.py'
Jan 26 09:59:07 compute-0 sudo[234483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:07 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:07 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1654003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:07.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:07 compute-0 python3.9[234485]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:07 compute-0 sudo[234483]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:07.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:59:08 compute-0 sudo[234635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avdbdafuswkijbxhvbyakvlxomlqgcuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421547.8152068-801-43661535272921/AnsiballZ_lineinfile.py'
Jan 26 09:59:08 compute-0 sudo[234635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:08 compute-0 ceph-mon[74456]: pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:08 compute-0 python3.9[234637]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:08 compute-0 sudo[234635]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:08 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1650003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:08 compute-0 sudo[234789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilpharrrmrqplzzqmcmijjfgdqedmcle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421548.4174135-801-95752481801543/AnsiballZ_lineinfile.py'
Jan 26 09:59:08 compute-0 sudo[234789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:08 compute-0 python3.9[234791]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:08 compute-0 sudo[234789]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:09 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1648003430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:09 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 09:59:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:09.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 09:59:09 compute-0 sudo[234941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezdojjbbcivfsngoqclsbppwhhesubdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421549.075901-801-142118231646971/AnsiballZ_lineinfile.py'
Jan 26 09:59:09 compute-0 sudo[234941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:09 compute-0 python3.9[234943]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 09:59:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:09.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 09:59:09 compute-0 sudo[234941]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:10 compute-0 sudo[235093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waeboiesjkncycnakgdpgemldlqbbint ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421549.7866611-801-271466029602804/AnsiballZ_lineinfile.py'
Jan 26 09:59:10 compute-0 sudo[235093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:10 compute-0 python3.9[235095]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:10 compute-0 sudo[235093]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:10 compute-0 ceph-mon[74456]: pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:10 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1654003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:59:11 compute-0 podman[235197]: 2026-01-26 09:59:11.195956301 +0000 UTC m=+0.117860025 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Jan 26 09:59:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:11 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1650003c60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:11 compute-0 sudo[235271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmcgecwfthfskdsrvasxifsrqbcurfvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421550.8442192-888-42324580210342/AnsiballZ_stat.py'
Jan 26 09:59:11 compute-0 sudo[235271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:11 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1648003430 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:11.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:11 compute-0 python3.9[235273]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 09:59:11 compute-0 sudo[235271]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:11.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:11 compute-0 sudo[235425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slppzlowwsanrbczukpzudyadujvgjao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421551.700418-912-254863969172201/AnsiballZ_command.py'
Jan 26 09:59:11 compute-0 sudo[235425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:12 compute-0 python3.9[235427]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 09:59:12 compute-0 sudo[235425]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:12 compute-0 ceph-mon[74456]: pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:59:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:12 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:12 compute-0 sudo[235582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdaxuuzsbqfdsobztytznnvsdnclvbva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421552.6324368-939-175200694335345/AnsiballZ_systemd_service.py'
Jan 26 09:59:12 compute-0 sudo[235582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:59:13 compute-0 python3.9[235584]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:59:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:13 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:13 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 26 09:59:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:13 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1654003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:13 compute-0 sudo[235582]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 26 09:59:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:13.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 26 09:59:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:13.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:13 compute-0 sudo[235738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjzeoocmtfwgaraauvjlngulecapfaxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421553.6683838-963-18462798452080/AnsiballZ_systemd_service.py'
Jan 26 09:59:13 compute-0 sudo[235738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:14 compute-0 ceph-mon[74456]: pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:14 compute-0 python3.9[235740]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:59:14 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 26 09:59:14 compute-0 udevadm[235746]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 26 09:59:14 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 26 09:59:14 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 26 09:59:14 compute-0 multipathd[235750]: --------start up--------
Jan 26 09:59:14 compute-0 multipathd[235750]: read /etc/multipath.conf
Jan 26 09:59:14 compute-0 multipathd[235750]: path checkers start up
Jan 26 09:59:14 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 26 09:59:14 compute-0 sudo[235738]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:14 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1648004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:15 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:15 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:15.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:15.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:15 compute-0 sudo[235907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlblyumgbbcgmhnaqyudxqbpqjvlwjwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421555.4693847-999-224849501032903/AnsiballZ_file.py'
Jan 26 09:59:15 compute-0 sudo[235907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:15 compute-0 python3.9[235909]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 26 09:59:16 compute-0 sudo[235907]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:16 compute-0 ceph-mon[74456]: pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:16 compute-0 sudo[236061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvcjbmmqiaqilupewsddfjdhutaksbkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421556.2115924-1023-260208119481958/AnsiballZ_modprobe.py'
Jan 26 09:59:16 compute-0 sudo[236061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:16 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1654003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:59:16] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:59:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:59:16] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 09:59:16 compute-0 python3.9[236063]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 26 09:59:16 compute-0 kernel: Key type psk registered
Jan 26 09:59:16 compute-0 sudo[236061]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:59:17.057Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:59:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:59:17.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:59:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:17 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1654003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:17 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:17.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:17 compute-0 sudo[236224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqsaokvqxskqcbrcodkikpurxivhsuuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421557.1056147-1047-26199992853208/AnsiballZ_stat.py'
Jan 26 09:59:17 compute-0 sudo[236224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:17.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:17 compute-0 python3.9[236226]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 09:59:17 compute-0 sudo[236224]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:59:18 compute-0 sudo[236347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaskavvgqsfmbnxsryntkqcogehcptsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421557.1056147-1047-26199992853208/AnsiballZ_copy.py'
Jan 26 09:59:18 compute-0 sudo[236347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:18 compute-0 python3.9[236349]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769421557.1056147-1047-26199992853208/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:18 compute-0 sudo[236347]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:18 compute-0 kernel: ganesha.nfsd[231410]: segfault at 50 ip 00007f16f5af432e sp 00007f166a7fb210 error 4 in libntirpc.so.5.8[7f16f5ad9000+2c000] likely on CPU 3 (core 0, socket 3)
Jan 26 09:59:18 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 09:59:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[229076]: 26/01/2026 09:59:18 : epoch 69773abe : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f165c002000 fd 42 proxy ignored for local
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_09:59:18
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log', '.nfs', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 09:59:18 compute-0 systemd[1]: Started Process Core Dump (PID 236376/UID 0).
Jan 26 09:59:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:59:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 09:59:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:19 compute-0 sudo[236526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pawhlcucqjwwfvkexzowtpoweqtplfmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421558.80845-1095-95709515086178/AnsiballZ_lineinfile.py'
Jan 26 09:59:19 compute-0 sudo[236526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:19 compute-0 sudo[236477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:59:19 compute-0 sudo[236477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:19 compute-0 sudo[236477]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:19 compute-0 ceph-mon[74456]: pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:19.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:19 compute-0 python3.9[236529]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:19 compute-0 sudo[236526]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:19.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:19 compute-0 systemd-coredump[236377]: Process 229080 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 46:
                                                    #0  0x00007f16f5af432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 09:59:19 compute-0 systemd[1]: systemd-coredump@10-236376-0.service: Deactivated successfully.
Jan 26 09:59:19 compute-0 systemd[1]: systemd-coredump@10-236376-0.service: Consumed 1.289s CPU time.
Jan 26 09:59:20 compute-0 sudo[236685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caxheklykadogmnogustqvpozefplkck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421559.7615943-1119-61901424018389/AnsiballZ_systemd.py'
Jan 26 09:59:20 compute-0 sudo[236685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:20 compute-0 podman[236684]: 2026-01-26 09:59:20.046347455 +0000 UTC m=+0.027873570 container died 2c85436f6539f346b1fec68746c76935048c020698a8d26f072ed09526303db5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f657b7fc7c63d31bbbf0b0479dc16d94aedcd853a9647d9c2556063905fa89cb-merged.mount: Deactivated successfully.
Jan 26 09:59:20 compute-0 podman[236684]: 2026-01-26 09:59:20.111562243 +0000 UTC m=+0.093088348 container remove 2c85436f6539f346b1fec68746c76935048c020698a8d26f072ed09526303db5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 09:59:20 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 09:59:20 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 09:59:20 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.570s CPU time.
Jan 26 09:59:20 compute-0 python3.9[236692]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:59:20 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:59:20 compute-0 ceph-mon[74456]: pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:59:21 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 26 09:59:21 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 26 09:59:21 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 26 09:59:21 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 26 09:59:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:21.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:21 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 26 09:59:21 compute-0 sudo[236685]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000077s ======
Jan 26 09:59:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:21.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000077s
Jan 26 09:59:22 compute-0 sudo[236883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xucpciivbcmvtnezvnkgypshaicysrmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421561.6806846-1143-102020744918191/AnsiballZ_dnf.py'
Jan 26 09:59:22 compute-0 sudo[236883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:22 compute-0 ceph-mon[74456]: pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 09:59:22 compute-0 python3.9[236885]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 09:59:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:59:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:23.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:23.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:24 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 26 09:59:24 compute-0 ceph-mon[74456]: pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:24 compute-0 systemd[1]: Reloading.
Jan 26 09:59:24 compute-0 systemd-rc-local-generator[236919]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:59:24 compute-0 systemd-sysv-generator[236924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:59:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095924 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 09:59:24 compute-0 systemd[1]: Reloading.
Jan 26 09:59:24 compute-0 systemd-rc-local-generator[236954]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:59:24 compute-0 systemd-sysv-generator[236958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:59:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:25 compute-0 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 26 09:59:25 compute-0 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 26 09:59:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:25.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:25 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 26 09:59:25 compute-0 lvm[237003]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:59:25 compute-0 lvm[237003]: VG ceph_vg0 finished
Jan 26 09:59:25 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 09:59:25 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 09:59:25 compute-0 systemd[1]: Reloading.
Jan 26 09:59:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:25.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:25 compute-0 systemd-rc-local-generator[237058]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:59:25 compute-0 systemd-sysv-generator[237063]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:59:25 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 09:59:26 compute-0 ceph-mon[74456]: pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:26 compute-0 sudo[236883]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:59:26] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:59:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:59:26] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 09:59:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:59:27.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:59:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 09:59:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 09:59:27 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.474s CPU time.
Jan 26 09:59:27 compute-0 systemd[1]: run-r3bf0647d815a4d84bd2cae5b614ed260.service: Deactivated successfully.
Jan 26 09:59:27 compute-0 sudo[238359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syzyeqkmvzyucthjkbvaideodixhojwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421567.046483-1167-272414156779206/AnsiballZ_systemd_service.py'
Jan 26 09:59:27 compute-0 sudo[238359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:27.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000076s ======
Jan 26 09:59:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:27.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000076s
Jan 26 09:59:27 compute-0 python3.9[238361]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:59:27 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 26 09:59:27 compute-0 iscsid[231720]: iscsid shutting down.
Jan 26 09:59:27 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 26 09:59:27 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 26 09:59:27 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 26 09:59:27 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 26 09:59:27 compute-0 systemd[1]: Started Open-iSCSI.
Jan 26 09:59:27 compute-0 sudo[238359]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:59:28 compute-0 ceph-mon[74456]: pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:28 compute-0 sudo[238517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijapmlfssjseegeoiagixjbjequepnnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421568.168303-1191-70940192841447/AnsiballZ_systemd_service.py'
Jan 26 09:59:28 compute-0 sudo[238517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:28 compute-0 python3.9[238519]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 09:59:28 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 26 09:59:28 compute-0 multipathd[235750]: exit (signal)
Jan 26 09:59:28 compute-0 multipathd[235750]: --------shut down-------
Jan 26 09:59:28 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 26 09:59:28 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 26 09:59:28 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 26 09:59:28 compute-0 podman[238521]: 2026-01-26 09:59:28.858886471 +0000 UTC m=+0.061128574 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 09:59:28 compute-0 multipathd[238544]: --------start up--------
Jan 26 09:59:28 compute-0 multipathd[238544]: read /etc/multipath.conf
Jan 26 09:59:28 compute-0 multipathd[238544]: path checkers start up
Jan 26 09:59:28 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 26 09:59:28 compute-0 sudo[238517]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:29.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:29.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:30 compute-0 python3.9[238701]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 09:59:30 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 11.
Jan 26 09:59:30 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:59:30 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.570s CPU time.
Jan 26 09:59:30 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 09:59:30 compute-0 ceph-mon[74456]: pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:30 compute-0 podman[238780]: 2026-01-26 09:59:30.506449289 +0000 UTC m=+0.042933261 container create 8298fb22e0040193cca53081e1416924e318bb3b793d38d94cdf8b0ddecaa55e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeead654aef501917d4d6d8751252b3cc8d8a703d844ee8a03c8273b8d29a8de/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeead654aef501917d4d6d8751252b3cc8d8a703d844ee8a03c8273b8d29a8de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeead654aef501917d4d6d8751252b3cc8d8a703d844ee8a03c8273b8d29a8de/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeead654aef501917d4d6d8751252b3cc8d8a703d844ee8a03c8273b8d29a8de/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:30 compute-0 podman[238780]: 2026-01-26 09:59:30.568024997 +0000 UTC m=+0.104509019 container init 8298fb22e0040193cca53081e1416924e318bb3b793d38d94cdf8b0ddecaa55e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 09:59:30 compute-0 podman[238780]: 2026-01-26 09:59:30.572955097 +0000 UTC m=+0.109439089 container start 8298fb22e0040193cca53081e1416924e318bb3b793d38d94cdf8b0ddecaa55e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 09:59:30 compute-0 bash[238780]: 8298fb22e0040193cca53081e1416924e318bb3b793d38d94cdf8b0ddecaa55e
Jan 26 09:59:30 compute-0 podman[238780]: 2026-01-26 09:59:30.487493338 +0000 UTC m=+0.023977370 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:59:30 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 09:59:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:30 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 09:59:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:30 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 09:59:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:30 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 09:59:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:30 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 09:59:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:30 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 09:59:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:30 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 09:59:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:30 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 09:59:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:30 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 09:59:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:59:31 compute-0 sudo[238962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcjntjqjhctbllysnevcaqodllrujdaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421570.7461176-1243-184139257583954/AnsiballZ_file.py'
Jan 26 09:59:31 compute-0 sudo[238962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:31 compute-0 python3.9[238964]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:31 compute-0 sudo[238962]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:31.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:31.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:32 compute-0 sudo[239114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdkufzfalmimjazreyjaaxgfyyebxysj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421571.9168532-1276-241647568076718/AnsiballZ_systemd_service.py'
Jan 26 09:59:32 compute-0 sudo[239114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:32 compute-0 ceph-mon[74456]: pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 09:59:32 compute-0 python3.9[239116]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 09:59:32 compute-0 systemd[1]: Reloading.
Jan 26 09:59:32 compute-0 systemd-rc-local-generator[239144]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 09:59:32 compute-0 systemd-sysv-generator[239147]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 09:59:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:59:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:59:33 compute-0 sudo[239114]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000076s ======
Jan 26 09:59:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:33.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000076s
Jan 26 09:59:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:33.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:59:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:59:33 compute-0 python3.9[239303]: ansible-ansible.builtin.service_facts Invoked
Jan 26 09:59:33 compute-0 network[239320]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 09:59:33 compute-0 network[239321]: 'network-scripts' will be removed from distribution in near future.
Jan 26 09:59:33 compute-0 network[239322]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 09:59:34 compute-0 ceph-mon[74456]: pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 09:59:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:59:34 compute-0 sudo[239328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:59:34 compute-0 sudo[239328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:34 compute-0 sudo[239328]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:34 compute-0 sudo[239355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 26 09:59:34 compute-0 sudo[239355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 09:59:35 compute-0 sudo[239355]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:59:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:59:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:59:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:59:35 compute-0 sudo[239399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:59:35 compute-0 sudo[239399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:35 compute-0 sudo[239399]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:35 compute-0 sudo[239424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 09:59:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:35.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:35 compute-0 sudo[239424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:35.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:35 compute-0 sudo[239424]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:59:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:59:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 09:59:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:59:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 09:59:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:59:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 09:59:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:59:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 09:59:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:59:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 09:59:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:59:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 09:59:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:59:36 compute-0 sudo[239493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:59:36 compute-0 sudo[239493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:36 compute-0 sudo[239493]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:36 compute-0 sudo[239522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 09:59:36 compute-0 sudo[239522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:36 compute-0 ceph-mon[74456]: pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 09:59:36 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:59:36 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:59:36 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:59:36 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 09:59:36 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:59:36 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:59:36 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 09:59:36 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 09:59:36 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 09:59:36 compute-0 podman[239607]: 2026-01-26 09:59:36.597018503 +0000 UTC m=+0.068594570 container create b505390d8334bbacf12bb276c2e33cc6d63914b6d903190d6c84d4ec660d01b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_nightingale, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:59:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:59:36] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 09:59:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:59:36] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 09:59:36 compute-0 systemd[1]: Started libpod-conmon-b505390d8334bbacf12bb276c2e33cc6d63914b6d903190d6c84d4ec660d01b6.scope.
Jan 26 09:59:36 compute-0 podman[239607]: 2026-01-26 09:59:36.566885529 +0000 UTC m=+0.038461686 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:59:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:59:36 compute-0 podman[239607]: 2026-01-26 09:59:36.707085651 +0000 UTC m=+0.178661798 container init b505390d8334bbacf12bb276c2e33cc6d63914b6d903190d6c84d4ec660d01b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 09:59:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:36 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 09:59:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:36 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 09:59:36 compute-0 podman[239607]: 2026-01-26 09:59:36.71706376 +0000 UTC m=+0.188639867 container start b505390d8334bbacf12bb276c2e33cc6d63914b6d903190d6c84d4ec660d01b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 09:59:36 compute-0 podman[239607]: 2026-01-26 09:59:36.722272051 +0000 UTC m=+0.193848128 container attach b505390d8334bbacf12bb276c2e33cc6d63914b6d903190d6c84d4ec660d01b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:59:36 compute-0 sad_nightingale[239628]: 167 167
Jan 26 09:59:36 compute-0 podman[239607]: 2026-01-26 09:59:36.724081891 +0000 UTC m=+0.195657958 container died b505390d8334bbacf12bb276c2e33cc6d63914b6d903190d6c84d4ec660d01b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_nightingale, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 09:59:36 compute-0 systemd[1]: libpod-b505390d8334bbacf12bb276c2e33cc6d63914b6d903190d6c84d4ec660d01b6.scope: Deactivated successfully.
Jan 26 09:59:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ab18a792fc56db0869c145b1875927c7559e08c57e3766600fff6adc42550a2-merged.mount: Deactivated successfully.
Jan 26 09:59:36 compute-0 podman[239607]: 2026-01-26 09:59:36.761327992 +0000 UTC m=+0.232904059 container remove b505390d8334bbacf12bb276c2e33cc6d63914b6d903190d6c84d4ec660d01b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_nightingale, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:59:36 compute-0 systemd[1]: libpod-conmon-b505390d8334bbacf12bb276c2e33cc6d63914b6d903190d6c84d4ec660d01b6.scope: Deactivated successfully.
Jan 26 09:59:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 09:59:36 compute-0 podman[239653]: 2026-01-26 09:59:36.983178919 +0000 UTC m=+0.051620462 container create f075bfb459703d1532db4f3f41a4afa5d5b465e49c3cc20d3ad5b6bd555e055f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lumiere, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 26 09:59:37 compute-0 systemd[1]: Started libpod-conmon-f075bfb459703d1532db4f3f41a4afa5d5b465e49c3cc20d3ad5b6bd555e055f.scope.
Jan 26 09:59:37 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:59:37 compute-0 podman[239653]: 2026-01-26 09:59:36.955803358 +0000 UTC m=+0.024244911 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:59:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82c4d6f8e367e9c9e0d46ef2653c32c94344573be576ef1797afbbecf944564b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82c4d6f8e367e9c9e0d46ef2653c32c94344573be576ef1797afbbecf944564b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82c4d6f8e367e9c9e0d46ef2653c32c94344573be576ef1797afbbecf944564b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82c4d6f8e367e9c9e0d46ef2653c32c94344573be576ef1797afbbecf944564b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82c4d6f8e367e9c9e0d46ef2653c32c94344573be576ef1797afbbecf944564b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:59:37.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:59:37 compute-0 podman[239653]: 2026-01-26 09:59:37.074801013 +0000 UTC m=+0.143242536 container init f075bfb459703d1532db4f3f41a4afa5d5b465e49c3cc20d3ad5b6bd555e055f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 26 09:59:37 compute-0 podman[239653]: 2026-01-26 09:59:37.088730467 +0000 UTC m=+0.157171970 container start f075bfb459703d1532db4f3f41a4afa5d5b465e49c3cc20d3ad5b6bd555e055f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lumiere, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 09:59:37 compute-0 podman[239653]: 2026-01-26 09:59:37.091998859 +0000 UTC m=+0.160440412 container attach f075bfb459703d1532db4f3f41a4afa5d5b465e49c3cc20d3ad5b6bd555e055f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:59:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:37.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:37 compute-0 great_lumiere[239669]: --> passed data devices: 0 physical, 1 LVM
Jan 26 09:59:37 compute-0 great_lumiere[239669]: --> All data devices are unavailable
Jan 26 09:59:37 compute-0 systemd[1]: libpod-f075bfb459703d1532db4f3f41a4afa5d5b465e49c3cc20d3ad5b6bd555e055f.scope: Deactivated successfully.
Jan 26 09:59:37 compute-0 podman[239653]: 2026-01-26 09:59:37.442416679 +0000 UTC m=+0.510858182 container died f075bfb459703d1532db4f3f41a4afa5d5b465e49c3cc20d3ad5b6bd555e055f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:59:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-82c4d6f8e367e9c9e0d46ef2653c32c94344573be576ef1797afbbecf944564b-merged.mount: Deactivated successfully.
Jan 26 09:59:37 compute-0 podman[239653]: 2026-01-26 09:59:37.485629801 +0000 UTC m=+0.554071304 container remove f075bfb459703d1532db4f3f41a4afa5d5b465e49c3cc20d3ad5b6bd555e055f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_lumiere, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 09:59:37 compute-0 systemd[1]: libpod-conmon-f075bfb459703d1532db4f3f41a4afa5d5b465e49c3cc20d3ad5b6bd555e055f.scope: Deactivated successfully.
Jan 26 09:59:37 compute-0 sudo[239522]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:37 compute-0 sudo[239695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:59:37 compute-0 sudo[239695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:37 compute-0 sudo[239695]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:37.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:37 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 26 09:59:37 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 26 09:59:37 compute-0 sudo[239720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 09:59:37 compute-0 sudo[239720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:59:38 compute-0 podman[239802]: 2026-01-26 09:59:38.028438725 +0000 UTC m=+0.043662838 container create 7e1e3b00ad9c00b216776f7beb852f0cf931bd4afed160d60f25a679b27c4659 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 26 09:59:38 compute-0 systemd[1]: Started libpod-conmon-7e1e3b00ad9c00b216776f7beb852f0cf931bd4afed160d60f25a679b27c4659.scope.
Jan 26 09:59:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:59:38 compute-0 podman[239802]: 2026-01-26 09:59:38.01073659 +0000 UTC m=+0.025960733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:59:38 compute-0 podman[239802]: 2026-01-26 09:59:38.118048674 +0000 UTC m=+0.133272807 container init 7e1e3b00ad9c00b216776f7beb852f0cf931bd4afed160d60f25a679b27c4659 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:59:38 compute-0 podman[239802]: 2026-01-26 09:59:38.125782131 +0000 UTC m=+0.141006254 container start 7e1e3b00ad9c00b216776f7beb852f0cf931bd4afed160d60f25a679b27c4659 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_engelbart, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 26 09:59:38 compute-0 podman[239802]: 2026-01-26 09:59:38.129396329 +0000 UTC m=+0.144620442 container attach 7e1e3b00ad9c00b216776f7beb852f0cf931bd4afed160d60f25a679b27c4659 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_engelbart, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:59:38 compute-0 peaceful_engelbart[239823]: 167 167
Jan 26 09:59:38 compute-0 podman[239802]: 2026-01-26 09:59:38.131832137 +0000 UTC m=+0.147056250 container died 7e1e3b00ad9c00b216776f7beb852f0cf931bd4afed160d60f25a679b27c4659 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_engelbart, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 26 09:59:38 compute-0 systemd[1]: libpod-7e1e3b00ad9c00b216776f7beb852f0cf931bd4afed160d60f25a679b27c4659.scope: Deactivated successfully.
Jan 26 09:59:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cccc3e708d1fd385e0b9704684cb1865fa29178320f19c73578e5eb529ed74e-merged.mount: Deactivated successfully.
Jan 26 09:59:38 compute-0 podman[239802]: 2026-01-26 09:59:38.162654024 +0000 UTC m=+0.177878137 container remove 7e1e3b00ad9c00b216776f7beb852f0cf931bd4afed160d60f25a679b27c4659 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 09:59:38 compute-0 systemd[1]: libpod-conmon-7e1e3b00ad9c00b216776f7beb852f0cf931bd4afed160d60f25a679b27c4659.scope: Deactivated successfully.
Jan 26 09:59:38 compute-0 ceph-mon[74456]: pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 09:59:38 compute-0 podman[239857]: 2026-01-26 09:59:38.325822086 +0000 UTC m=+0.049980976 container create ae36b2d82e716e82ace5b342ad1176dc307150901e25d70dcf6e7d6d34255f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 26 09:59:38 compute-0 systemd[1]: Started libpod-conmon-ae36b2d82e716e82ace5b342ad1176dc307150901e25d70dcf6e7d6d34255f05.scope.
Jan 26 09:59:38 compute-0 podman[239857]: 2026-01-26 09:59:38.295910299 +0000 UTC m=+0.020069209 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:59:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10166801be8abc4f13b24e685fb98cbb94428d1d8e7f1f7b62c6185f3afe1c0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10166801be8abc4f13b24e685fb98cbb94428d1d8e7f1f7b62c6185f3afe1c0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10166801be8abc4f13b24e685fb98cbb94428d1d8e7f1f7b62c6185f3afe1c0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10166801be8abc4f13b24e685fb98cbb94428d1d8e7f1f7b62c6185f3afe1c0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:38 compute-0 podman[239857]: 2026-01-26 09:59:38.412887279 +0000 UTC m=+0.137046189 container init ae36b2d82e716e82ace5b342ad1176dc307150901e25d70dcf6e7d6d34255f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Jan 26 09:59:38 compute-0 podman[239857]: 2026-01-26 09:59:38.419490818 +0000 UTC m=+0.143649708 container start ae36b2d82e716e82ace5b342ad1176dc307150901e25d70dcf6e7d6d34255f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_maxwell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:59:38 compute-0 podman[239857]: 2026-01-26 09:59:38.423339534 +0000 UTC m=+0.147498424 container attach ae36b2d82e716e82ace5b342ad1176dc307150901e25d70dcf6e7d6d34255f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_maxwell, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]: {
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:     "0": [
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:         {
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:             "devices": [
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "/dev/loop3"
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:             ],
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:             "lv_name": "ceph_lv0",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:             "lv_size": "21470642176",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:             "name": "ceph_lv0",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:             "tags": {
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "ceph.cluster_name": "ceph",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "ceph.crush_device_class": "",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "ceph.encrypted": "0",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "ceph.osd_id": "0",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "ceph.type": "block",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "ceph.vdo": "0",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:                 "ceph.with_tpm": "0"
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:             },
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:             "type": "block",
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:             "vg_name": "ceph_vg0"
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:         }
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]:     ]
Jan 26 09:59:38 compute-0 flamboyant_maxwell[239879]: }
Jan 26 09:59:38 compute-0 systemd[1]: libpod-ae36b2d82e716e82ace5b342ad1176dc307150901e25d70dcf6e7d6d34255f05.scope: Deactivated successfully.
Jan 26 09:59:38 compute-0 podman[239857]: 2026-01-26 09:59:38.717071403 +0000 UTC m=+0.441230293 container died ae36b2d82e716e82ace5b342ad1176dc307150901e25d70dcf6e7d6d34255f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_maxwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 26 09:59:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-10166801be8abc4f13b24e685fb98cbb94428d1d8e7f1f7b62c6185f3afe1c0d-merged.mount: Deactivated successfully.
Jan 26 09:59:38 compute-0 podman[239857]: 2026-01-26 09:59:38.758960243 +0000 UTC m=+0.483119133 container remove ae36b2d82e716e82ace5b342ad1176dc307150901e25d70dcf6e7d6d34255f05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 09:59:38 compute-0 systemd[1]: libpod-conmon-ae36b2d82e716e82ace5b342ad1176dc307150901e25d70dcf6e7d6d34255f05.scope: Deactivated successfully.
Jan 26 09:59:38 compute-0 sudo[239720]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:38 compute-0 sudo[239917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 09:59:38 compute-0 sudo[239917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:38 compute-0 sudo[239917]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:38 compute-0 sudo[239942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 09:59:38 compute-0 sudo[239942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 09:59:39 compute-0 sudo[239994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:59:39 compute-0 sudo[239994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:39 compute-0 sudo[239994]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:39 compute-0 podman[240032]: 2026-01-26 09:59:39.297263269 +0000 UTC m=+0.039426921 container create 1886f3d9e6c8090a0281a2ad4060647d983b4bcf10e5cb207f0a45dc7175e19b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 26 09:59:39 compute-0 systemd[1]: Started libpod-conmon-1886f3d9e6c8090a0281a2ad4060647d983b4bcf10e5cb207f0a45dc7175e19b.scope.
Jan 26 09:59:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:59:39 compute-0 podman[240032]: 2026-01-26 09:59:39.376398212 +0000 UTC m=+0.118561884 container init 1886f3d9e6c8090a0281a2ad4060647d983b4bcf10e5cb207f0a45dc7175e19b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 26 09:59:39 compute-0 podman[240032]: 2026-01-26 09:59:39.280937761 +0000 UTC m=+0.023101413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:59:39 compute-0 podman[240032]: 2026-01-26 09:59:39.383411673 +0000 UTC m=+0.125575325 container start 1886f3d9e6c8090a0281a2ad4060647d983b4bcf10e5cb207f0a45dc7175e19b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 09:59:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:39 compute-0 admiring_wu[240048]: 167 167
Jan 26 09:59:39 compute-0 podman[240032]: 2026-01-26 09:59:39.387461915 +0000 UTC m=+0.129625567 container attach 1886f3d9e6c8090a0281a2ad4060647d983b4bcf10e5cb207f0a45dc7175e19b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 09:59:39 compute-0 conmon[240048]: conmon 1886f3d9e6c8090a0281 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1886f3d9e6c8090a0281a2ad4060647d983b4bcf10e5cb207f0a45dc7175e19b.scope/container/memory.events
Jan 26 09:59:39 compute-0 systemd[1]: libpod-1886f3d9e6c8090a0281a2ad4060647d983b4bcf10e5cb207f0a45dc7175e19b.scope: Deactivated successfully.
Jan 26 09:59:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000077s ======
Jan 26 09:59:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:39.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000077s
Jan 26 09:59:39 compute-0 podman[240032]: 2026-01-26 09:59:39.388634135 +0000 UTC m=+0.130797777 container died 1886f3d9e6c8090a0281a2ad4060647d983b4bcf10e5cb207f0a45dc7175e19b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wu, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 09:59:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ad87222594eb0a4a78bfde21a3dca4ff7a7b9c238bc2ec63a5eec92388aeb94-merged.mount: Deactivated successfully.
Jan 26 09:59:39 compute-0 podman[240032]: 2026-01-26 09:59:39.427134304 +0000 UTC m=+0.169297956 container remove 1886f3d9e6c8090a0281a2ad4060647d983b4bcf10e5cb207f0a45dc7175e19b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 09:59:39 compute-0 systemd[1]: libpod-conmon-1886f3d9e6c8090a0281a2ad4060647d983b4bcf10e5cb207f0a45dc7175e19b.scope: Deactivated successfully.
Jan 26 09:59:39 compute-0 sshd-session[239985]: Invalid user oracle from 157.245.76.178 port 46646
Jan 26 09:59:39 compute-0 podman[240072]: 2026-01-26 09:59:39.602412339 +0000 UTC m=+0.037625943 container create 1f5cc928d795f2fbd5f84ddde2c56a8dcdc3118cddebd8841df11674b1d467ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:59:39 compute-0 sshd-session[239985]: Connection closed by invalid user oracle 157.245.76.178 port 46646 [preauth]
Jan 26 09:59:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:39.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:39 compute-0 systemd[1]: Started libpod-conmon-1f5cc928d795f2fbd5f84ddde2c56a8dcdc3118cddebd8841df11674b1d467ba.scope.
Jan 26 09:59:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 09:59:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998ad2337bb9fe758377434799520af5451221a341c701c362874c86338d328d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998ad2337bb9fe758377434799520af5451221a341c701c362874c86338d328d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998ad2337bb9fe758377434799520af5451221a341c701c362874c86338d328d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998ad2337bb9fe758377434799520af5451221a341c701c362874c86338d328d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 09:59:39 compute-0 podman[240072]: 2026-01-26 09:59:39.586559597 +0000 UTC m=+0.021773221 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 09:59:39 compute-0 podman[240072]: 2026-01-26 09:59:39.69476674 +0000 UTC m=+0.129980354 container init 1f5cc928d795f2fbd5f84ddde2c56a8dcdc3118cddebd8841df11674b1d467ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 26 09:59:39 compute-0 podman[240072]: 2026-01-26 09:59:39.710392094 +0000 UTC m=+0.145605738 container start 1f5cc928d795f2fbd5f84ddde2c56a8dcdc3118cddebd8841df11674b1d467ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_swartz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 09:59:39 compute-0 podman[240072]: 2026-01-26 09:59:39.714618721 +0000 UTC m=+0.149832355 container attach 1f5cc928d795f2fbd5f84ddde2c56a8dcdc3118cddebd8841df11674b1d467ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 09:59:40 compute-0 lvm[240215]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 09:59:40 compute-0 lvm[240215]: VG ceph_vg0 finished
Jan 26 09:59:40 compute-0 intelligent_swartz[240088]: {}
Jan 26 09:59:40 compute-0 ceph-mon[74456]: pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 09:59:40 compute-0 systemd[1]: libpod-1f5cc928d795f2fbd5f84ddde2c56a8dcdc3118cddebd8841df11674b1d467ba.scope: Deactivated successfully.
Jan 26 09:59:40 compute-0 systemd[1]: libpod-1f5cc928d795f2fbd5f84ddde2c56a8dcdc3118cddebd8841df11674b1d467ba.scope: Consumed 1.169s CPU time.
Jan 26 09:59:40 compute-0 podman[240072]: 2026-01-26 09:59:40.427934042 +0000 UTC m=+0.863147656 container died 1f5cc928d795f2fbd5f84ddde2c56a8dcdc3118cddebd8841df11674b1d467ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_swartz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 09:59:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-998ad2337bb9fe758377434799520af5451221a341c701c362874c86338d328d-merged.mount: Deactivated successfully.
Jan 26 09:59:40 compute-0 podman[240072]: 2026-01-26 09:59:40.467150326 +0000 UTC m=+0.902363930 container remove 1f5cc928d795f2fbd5f84ddde2c56a8dcdc3118cddebd8841df11674b1d467ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_swartz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 09:59:40 compute-0 systemd[1]: libpod-conmon-1f5cc928d795f2fbd5f84ddde2c56a8dcdc3118cddebd8841df11674b1d467ba.scope: Deactivated successfully.
Jan 26 09:59:40 compute-0 sudo[239942]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 09:59:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:59:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 09:59:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:59:40 compute-0 sudo[240232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 09:59:40 compute-0 sudo[240232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:40 compute-0 sudo[240232]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:59:41 compute-0 sudo[240382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdumknmbtuzqujmeibwddrxarjvynzyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421580.7331467-1333-157389657940178/AnsiballZ_systemd_service.py'
Jan 26 09:59:41 compute-0 sudo[240382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:41 compute-0 python3.9[240384]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:59:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000077s ======
Jan 26 09:59:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:41.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000077s
Jan 26 09:59:41 compute-0 sudo[240382]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:41 compute-0 podman[240386]: 2026-01-26 09:59:41.494167915 +0000 UTC m=+0.112737513 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 09:59:41 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:59:41 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 09:59:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:41.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:41 compute-0 sudo[240562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxhshtgvdizrforrntlcmpojurhcahty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421581.6004288-1333-6989776671130/AnsiballZ_systemd_service.py'
Jan 26 09:59:41 compute-0 sudo[240562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:42 compute-0 python3.9[240564]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:59:42 compute-0 sudo[240562]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:42 compute-0 ceph-mon[74456]: pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 09:59:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:42 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 09:59:42 compute-0 sudo[240717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afigrstpcgxjznjzbdrtgpgnlrouawtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421582.4514043-1333-155459069084149/AnsiballZ_systemd_service.py'
Jan 26 09:59:42 compute-0 sudo[240717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:59:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:59:43 compute-0 python3.9[240732]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:59:43 compute-0 sudo[240717]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:43 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:43 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410001550 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:43.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:43 compute-0 sudo[240885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpcrutugfsgcyfftqutvvpyjsaegppfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421583.2399957-1333-70145292557198/AnsiballZ_systemd_service.py'
Jan 26 09:59:43 compute-0 sudo[240885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:43.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:43 compute-0 ceph-mon[74456]: pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:59:43 compute-0 python3.9[240887]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:59:43 compute-0 sudo[240885]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:44 compute-0 sudo[241039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buehkhzkoklbgpfenzubejvntatuxcnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421584.0287213-1333-77810425187841/AnsiballZ_systemd_service.py'
Jan 26 09:59:44 compute-0 sudo[241039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:44 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd404000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:44 compute-0 python3.9[241041]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:59:44 compute-0 sudo[241039]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:59:45 compute-0 sudo[241193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpfpsbywxajesogqwzefufsstnmmnvjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421584.8355675-1333-220863058330214/AnsiballZ_systemd_service.py'
Jan 26 09:59:45 compute-0 sudo[241193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:45 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:45 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:59:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:45.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:59:45 compute-0 python3.9[241195]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:59:45 compute-0 sudo[241193]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:45.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:45 compute-0 sudo[241346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beytqearxpgewesjqzhaobhoabjjjjcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421585.5711927-1333-97740648500713/AnsiballZ_systemd_service.py'
Jan 26 09:59:45 compute-0 sudo[241346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:46 compute-0 python3.9[241348]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:59:46 compute-0 ceph-mon[74456]: pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 09:59:46 compute-0 sudo[241346]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/095946 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 09:59:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:46 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:59:46] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 09:59:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:59:46] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 09:59:46 compute-0 sudo[241501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwlwultybyecucpanekrxysksmsyjnqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421586.3110015-1333-6062922615696/AnsiballZ_systemd_service.py'
Jan 26 09:59:46 compute-0 sudo[241501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:46 compute-0 python3.9[241503]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 09:59:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:59:46 compute-0 sudo[241501]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:59:47.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:59:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:47 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd4040016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:47 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd400001140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:59:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:47.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:59:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:59:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:47.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:59:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:59:48 compute-0 ceph-mon[74456]: pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:59:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:48 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 09:59:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:59:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:59:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:59:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:59:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:59:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 09:59:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 09:59:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:59:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 09:59:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:49 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:49 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd4040016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:59:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:49.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:59:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:49.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:50 compute-0 sudo[241656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzsxaipiwmffxvzaqgxhyyzvgajifdqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421589.845511-1510-269526402627593/AnsiballZ_file.py'
Jan 26 09:59:50 compute-0 sudo[241656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:50 compute-0 ceph-mon[74456]: pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 09:59:50 compute-0 python3.9[241658]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:50 compute-0 sudo[241656]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:50 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd400001c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:50 compute-0 sudo[241810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysrgfcgkfazywggoracbtwmwssqijeby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421590.558225-1510-103998452377787/AnsiballZ_file.py'
Jan 26 09:59:50 compute-0 sudo[241810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 09:59:51 compute-0 python3.9[241812]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:51 compute-0 sudo[241810]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:51 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:51 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:51.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:51.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:51 compute-0 sudo[241962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oknfsicbcgldwxkkisdahgnekojthgje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421591.4107256-1510-5498877613714/AnsiballZ_file.py'
Jan 26 09:59:51 compute-0 sudo[241962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:51 compute-0 python3.9[241964]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:51 compute-0 sudo[241962]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:52 compute-0 ceph-mon[74456]: pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 09:59:52 compute-0 sudo[242115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-valcvcwutjjeoazolhrvqrsqylseukag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421592.0446181-1510-149943853391269/AnsiballZ_file.py'
Jan 26 09:59:52 compute-0 sudo[242115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:52 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd4040016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:52 compute-0 python3.9[242117]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:52 compute-0 sudo[242115]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:59:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:59:53 compute-0 sudo[242268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwewhgbtbiqqxwcmsaoynejwvameewuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421592.775701-1510-233086042928615/AnsiballZ_file.py'
Jan 26 09:59:53 compute-0 sudo[242268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:53 compute-0 python3.9[242270]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:53 compute-0 sudo[242268]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:53 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd400001c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:53 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:53.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:53.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:53 compute-0 sudo[242420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpbikwuvjrfcqyljwgccyupbckgnhslr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421593.400456-1510-184498660999249/AnsiballZ_file.py'
Jan 26 09:59:53 compute-0 sudo[242420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:53 compute-0 python3.9[242422]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:53 compute-0 sudo[242420]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:54 compute-0 sudo[242572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxtoajpdmwysefhkhmjpvkkcgrdabygb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421594.0568948-1510-247134262405722/AnsiballZ_file.py'
Jan 26 09:59:54 compute-0 sudo[242572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:54 compute-0 ceph-mon[74456]: pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:59:54 compute-0 python3.9[242574]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:54 compute-0 sudo[242572]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:54 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:59:54.683 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 09:59:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:59:54.683 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 09:59:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 09:59:54.683 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 09:59:54 compute-0 sudo[242726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agotdqgimyncrswhcrjromjywfzxewkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421594.6938286-1510-190088838391028/AnsiballZ_file.py'
Jan 26 09:59:54 compute-0 sudo[242726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:59:55 compute-0 python3.9[242728]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:55 compute-0 sudo[242726]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:55 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd404002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:55 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd400001c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:55.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:55.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:56 compute-0 ceph-mon[74456]: pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 09:59:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:56 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:56 compute-0 sudo[242880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcxrhumeqhqqdaxjtturuxnibxpcqraw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421596.3549361-1681-209870282370383/AnsiballZ_file.py'
Jan 26 09:59:56 compute-0 sudo[242880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:09:59:56] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 09:59:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:09:59:56] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 09:59:56 compute-0 python3.9[242882]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:56 compute-0 sudo[242880]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:59:57.062Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 09:59:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T09:59:57.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 09:59:57 compute-0 sudo[243032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgfpgjodjvdvjazokcvoyzxsnqpmnwah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421596.9340065-1681-240013440385349/AnsiballZ_file.py'
Jan 26 09:59:57 compute-0 sudo[243032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:57 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:57 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd404002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:57 compute-0 python3.9[243034]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:57 compute-0 sudo[243032]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:57.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:59:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:57.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:59:57 compute-0 sudo[243184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vayddqxwhwzwuwrjdqthvxkqcvrdrtnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421597.5321612-1681-35788414052473/AnsiballZ_file.py'
Jan 26 09:59:57 compute-0 sudo[243184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:57 compute-0 python3.9[243186]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:57 compute-0 sudo[243184]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 09:59:58 compute-0 ceph-mon[74456]: pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:58 compute-0 sudo[243338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixqcxtzgyrdxbdeskjjbgfjuwhbkdzmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421598.1533022-1681-209823078746949/AnsiballZ_file.py'
Jan 26 09:59:58 compute-0 sudo[243338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:58 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd4000030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:58 compute-0 python3.9[243340]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:58 compute-0 sudo[243338]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 09:59:59 compute-0 sudo[243504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vugbaklsztirdqceqdsklvgqngjcmoum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421598.8360927-1681-146698498060166/AnsiballZ_file.py'
Jan 26 09:59:59 compute-0 sudo[243504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:59 compute-0 podman[243464]: 2026-01-26 09:59:59.144110193 +0000 UTC m=+0.072772651 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 09:59:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:59 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:59 compute-0 python3.9[243510]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:59 compute-0 sudo[243504]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:59 compute-0 sudo[243511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 09:59:59 compute-0 sudo[243511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 09:59:59 compute-0 sudo[243511]: pam_unix(sudo:session): session closed for user root
Jan 26 09:59:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 09:59:59 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 09:59:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 09:59:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:09:59:59.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 09:59:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 09:59:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 09:59:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:09:59:59.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 09:59:59 compute-0 sudo[243685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veenehdmgoglwzbqvpnbnluzohufruxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421599.442845-1681-28416134075060/AnsiballZ_file.py'
Jan 26 09:59:59 compute-0 sudo[243685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 09:59:59 compute-0 python3.9[243687]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 09:59:59 compute-0 sudo[243685]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 26 10:00:00 compute-0 sudo[243838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahmithyluywfynuzujawayiiwutcuwpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421600.1271539-1681-52616052971738/AnsiballZ_file.py'
Jan 26 10:00:00 compute-0 sudo[243838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:00 compute-0 ceph-mon[74456]: pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:00 compute-0 ceph-mon[74456]: overall HEALTH_OK
Jan 26 10:00:00 compute-0 python3.9[243841]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 10:00:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:00 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd404002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:00 compute-0 sudo[243838]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:00:01 compute-0 sudo[243991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfjaqhpiklmdadtyzxhbbbmbthewszvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421600.7543004-1681-212203908711790/AnsiballZ_file.py'
Jan 26 10:00:01 compute-0 sudo[243991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:01 compute-0 python3.9[243993]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 10:00:01 compute-0 sudo[243991]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:01 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd4000030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:01 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:01.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:01.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:01 compute-0 ceph-mon[74456]: pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:00:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:02 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:02 compute-0 sudo[244145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaegepqidsfvwmnfmfawugnmqsgmnrjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421602.4684892-1855-273466874447605/AnsiballZ_command.py'
Jan 26 10:00:02 compute-0 sudo[244145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:02 compute-0 python3.9[244147]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 10:00:02 compute-0 sudo[244145]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:00:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:03 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd404003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:03 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd4000030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:03.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:00:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:03.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:00:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:00:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:00:03 compute-0 python3.9[244299]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 10:00:04 compute-0 ceph-mon[74456]: pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:00:04 compute-0 sudo[244451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kabqpekaodqtympcfastucbafzpophdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421604.21245-1909-100231035050468/AnsiballZ_systemd_service.py'
Jan 26 10:00:04 compute-0 sudo[244451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:04 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:04 compute-0 python3.9[244453]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 10:00:04 compute-0 systemd[1]: Reloading.
Jan 26 10:00:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:00:05 compute-0 systemd-rc-local-generator[244482]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 10:00:05 compute-0 systemd-sysv-generator[244485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 10:00:05 compute-0 sudo[244451]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:05 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:05 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd404003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:00:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:05.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:00:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:05.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:05 compute-0 sudo[244639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgcowzydstilfpltztzkriiewvfqfquv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421605.4675248-1933-106350212577053/AnsiballZ_command.py'
Jan 26 10:00:05 compute-0 sudo[244639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:05 compute-0 python3.9[244641]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 10:00:05 compute-0 sudo[244639]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:06 compute-0 ceph-mon[74456]: pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.109109) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421606109143, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 862, "num_deletes": 251, "total_data_size": 1451955, "memory_usage": 1484096, "flush_reason": "Manual Compaction"}
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421606118754, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1418430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19118, "largest_seqno": 19979, "table_properties": {"data_size": 1414143, "index_size": 2003, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9454, "raw_average_key_size": 19, "raw_value_size": 1405560, "raw_average_value_size": 2898, "num_data_blocks": 90, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769421533, "oldest_key_time": 1769421533, "file_creation_time": 1769421606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 9683 microseconds, and 3822 cpu microseconds.
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.118790) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1418430 bytes OK
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.118807) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.120298) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.120310) EVENT_LOG_v1 {"time_micros": 1769421606120306, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.120324) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1447842, prev total WAL file size 1447842, number of live WAL files 2.
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.120798) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1385KB)], [41(13MB)]
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421606120853, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 15249973, "oldest_snapshot_seqno": -1}
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4968 keys, 13068280 bytes, temperature: kUnknown
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421606197042, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 13068280, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13033773, "index_size": 20957, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 126687, "raw_average_key_size": 25, "raw_value_size": 12942474, "raw_average_value_size": 2605, "num_data_blocks": 861, "num_entries": 4968, "num_filter_entries": 4968, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769421606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.197310) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 13068280 bytes
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.198667) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.0 rd, 171.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 13.2 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(20.0) write-amplify(9.2) OK, records in: 5484, records dropped: 516 output_compression: NoCompression
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.198687) EVENT_LOG_v1 {"time_micros": 1769421606198678, "job": 20, "event": "compaction_finished", "compaction_time_micros": 76262, "compaction_time_cpu_micros": 24906, "output_level": 6, "num_output_files": 1, "total_output_size": 13068280, "num_input_records": 5484, "num_output_records": 4968, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421606199694, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421606202654, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.120736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.202853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.202858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.202860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.202862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:00:06 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:00:06.202864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:00:06 compute-0 sudo[244792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkmwazctxudcugsygmyvlhzkpqrljcjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421606.0244696-1933-106091704597762/AnsiballZ_command.py'
Jan 26 10:00:06 compute-0 sudo[244792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:06 compute-0 python3.9[244794]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 10:00:06 compute-0 sudo[244792]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:06 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd4000041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:00:06] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:00:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:00:06] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:00:06 compute-0 sudo[244947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fecbirsxdhwaxftzglfcvihtsvaujtwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421606.652039-1933-258738482525913/AnsiballZ_command.py'
Jan 26 10:00:06 compute-0 sudo[244947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:00:07.064Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:00:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:00:07.064Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:00:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:00:07.064Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:00:07 compute-0 python3.9[244949]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 10:00:07 compute-0 sudo[244947]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:07 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:07 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:00:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:07.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:00:07 compute-0 sudo[245100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbbolakuxkvsemerbwisprspwokiftqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421607.31198-1933-170545009169122/AnsiballZ_command.py'
Jan 26 10:00:07 compute-0 sudo[245100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:07.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:07 compute-0 python3.9[245102]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 10:00:07 compute-0 sudo[245100]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:00:08 compute-0 ceph-mon[74456]: pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:08 compute-0 sudo[245253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmhgpttwejbqwygooglcriribzievfon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421608.005224-1933-117023706432321/AnsiballZ_command.py'
Jan 26 10:00:08 compute-0 sudo[245253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:08 compute-0 python3.9[245255]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 10:00:08 compute-0 sudo[245253]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:08 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd404003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:08 compute-0 sudo[245408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocefeybazwvyutjlyyjmfqxxpwzjhqtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421608.6292348-1933-133318000972278/AnsiballZ_command.py'
Jan 26 10:00:08 compute-0 sudo[245408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:09 compute-0 python3.9[245410]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 10:00:09 compute-0 sudo[245408]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:09 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd4000041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:09 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:09.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:09 compute-0 sudo[245561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmlqnlwueghjrxkgimpakfppxxckorsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421609.2352426-1933-147128051649739/AnsiballZ_command.py'
Jan 26 10:00:09 compute-0 sudo[245561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:09.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:09 compute-0 python3.9[245563]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 10:00:09 compute-0 sudo[245561]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:10 compute-0 sudo[245714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfbunyubedgpkjdvbkvjrpdqgiqkzcfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421609.8525565-1933-32954812857132/AnsiballZ_command.py'
Jan 26 10:00:10 compute-0 sudo[245714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:10 compute-0 ceph-mon[74456]: pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:10 compute-0 python3.9[245716]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 10:00:10 compute-0 sudo[245714]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:10 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:00:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:11 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd404003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:11 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd4000041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:11.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:11.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:12 compute-0 ceph-mon[74456]: pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:00:12 compute-0 podman[245796]: 2026-01-26 10:00:12.185137697 +0000 UTC m=+0.113405830 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 26 10:00:12 compute-0 sudo[245896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snsxzszmeollktabgwfyorsjocovqmdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421611.965231-2140-167381524376291/AnsiballZ_file.py'
Jan 26 10:00:12 compute-0 sudo[245896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:12 compute-0 python3.9[245898]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:12 compute-0 sudo[245896]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:12 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:12 compute-0 sudo[246050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzqeupxwsuovcnyssjbqryqnurpojhcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421612.5932622-2140-109329012043510/AnsiballZ_file.py'
Jan 26 10:00:12 compute-0 sudo[246050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:00:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:13 compute-0 python3.9[246052]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:13 compute-0 sudo[246050]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:13 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:13 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd404003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:13.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:13 compute-0 sudo[246202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysgodstktaehchcnrduntczjlfefvprt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421613.1652284-2140-255614521406252/AnsiballZ_file.py'
Jan 26 10:00:13 compute-0 sudo[246202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:13 compute-0 python3.9[246204]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:13 compute-0 sudo[246202]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:13.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:14 compute-0 ceph-mon[74456]: pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:14 compute-0 sudo[246356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbebvrtvfadanvemzfhbwuwuyzgtnegm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421614.3422897-2206-184173746285729/AnsiballZ_file.py'
Jan 26 10:00:14 compute-0 sudo[246356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:14 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd4000041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:14 compute-0 python3.9[246358]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:14 compute-0 sudo[246356]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:00:15 compute-0 sudo[246510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gafsborrljhtxqizepdftnbugnzalpkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421614.9468715-2206-48032431759858/AnsiballZ_file.py'
Jan 26 10:00:15 compute-0 sudo[246510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:15 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd41c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:15 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd3f8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:15.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:15 compute-0 python3.9[246512]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:15 compute-0 sudo[246510]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:15.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:15 compute-0 sudo[246662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cukrilallhafcoawrtckxrnycxnvldba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421615.6476622-2206-36331312994620/AnsiballZ_file.py'
Jan 26 10:00:15 compute-0 sudo[246662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:16 compute-0 python3.9[246664]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:16 compute-0 sudo[246662]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:16 compute-0 ceph-mon[74456]: pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:00:16 compute-0 sudo[246816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrhyswwruvmitaiavfrknaoeskiqxjwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421616.2403624-2206-187221693781425/AnsiballZ_file.py'
Jan 26 10:00:16 compute-0 sudo[246816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:16 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410002260 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:00:16] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:00:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:00:16] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:00:16 compute-0 python3.9[246818]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:16 compute-0 sudo[246816]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:00:17.065Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:00:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:00:17.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:00:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:00:17.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:00:17 compute-0 sudo[246968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oafppnjhfdrylwvemfkxgpqhnqbovwzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421616.8334959-2206-154166148788050/AnsiballZ_file.py'
Jan 26 10:00:17 compute-0 sudo[246968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:17 compute-0 python3.9[246970]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:17 compute-0 kernel: ganesha.nfsd[240721]: segfault at 50 ip 00007fd4a4e0d32e sp 00007fd4237fd210 error 4 in libntirpc.so.5.8[7fd4a4df2000+2c000] likely on CPU 0 (core 0, socket 0)
Jan 26 10:00:17 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 10:00:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[238795]: 26/01/2026 10:00:17 : epoch 69773b02 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd410002260 fd 39 proxy ignored for local
Jan 26 10:00:17 compute-0 sudo[246968]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:17 compute-0 systemd[1]: Started Process Core Dump (PID 246971/UID 0).
Jan 26 10:00:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:17.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:17.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:17 compute-0 sudo[247122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgiqygoubujxrgjjtkdilrgmelvacxaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421617.485693-2206-184339512528736/AnsiballZ_file.py'
Jan 26 10:00:17 compute-0 sudo[247122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:17 compute-0 python3.9[247124]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:17 compute-0 sudo[247122]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:00:18 compute-0 sudo[247274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grhsepsgsevzsxcshqjxygkoqllhzwrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421618.0480213-2206-232861704879218/AnsiballZ_file.py'
Jan 26 10:00:18 compute-0 sudo[247274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:18 compute-0 ceph-mon[74456]: pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:18 compute-0 python3.9[247276]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:18 compute-0 systemd-coredump[246972]: Process 238799 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007fd4a4e0d32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 10:00:18 compute-0 sudo[247274]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:18 compute-0 systemd[1]: systemd-coredump@11-246971-0.service: Deactivated successfully.
Jan 26 10:00:18 compute-0 systemd[1]: systemd-coredump@11-246971-0.service: Consumed 1.207s CPU time.
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:00:18
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.control', '.nfs', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.meta', 'images', '.mgr', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data']
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:00:18 compute-0 podman[247307]: 2026-01-26 10:00:18.663813188 +0000 UTC m=+0.024770805 container died 8298fb22e0040193cca53081e1416924e318bb3b793d38d94cdf8b0ddecaa55e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:00:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-eeead654aef501917d4d6d8751252b3cc8d8a703d844ee8a03c8273b8d29a8de-merged.mount: Deactivated successfully.
Jan 26 10:00:18 compute-0 podman[247307]: 2026-01-26 10:00:18.704626961 +0000 UTC m=+0.065584578 container remove 8298fb22e0040193cca53081e1416924e318bb3b793d38d94cdf8b0ddecaa55e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 10:00:18 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 10:00:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:00:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:00:18 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 10:00:18 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.424s CPU time.
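[Annotation] The lines above record an NFS-Ganesha crash end to end: systemd-coredump captures ganesha.nfsd (PID 238799, faulting frame in libntirpc.so.5.8), podman reports the container died and removed, and the cephadm-managed unit exits with status=139 (128+11, i.e. SIGSEGV) and fails with 'exit-code'. A minimal sketch, assuming the journal has been exported to plain text like this file, that correlates "dumped core" events with the unit failures that follow; the regexes are written against exactly the message formats shown above.

```python
import re

# Correlate "dumped core" journal lines with subsequent systemd unit
# failures, using the message formats visible in this log.

CORE_RE = re.compile(r"systemd-coredump\[\d+\]: Process (\d+) \((\S+)\) .* dumped core")
FAIL_RE = re.compile(r"systemd\[1\]: (\S+\.service): Failed with result '(\S+)'")

def correlate(lines):
    pending = []   # (pid, comm) of core dumps awaiting a unit failure
    events = []
    for line in lines:
        m = CORE_RE.search(line)
        if m:
            pending.append((int(m.group(1)), m.group(2)))
            continue
        m = FAIL_RE.search(line)
        if m and pending:
            pid, comm = pending.pop(0)
            events.append((comm, pid, m.group(1), m.group(2)))
    return events

sample = [
    "systemd-coredump[246972]: Process 238799 (ganesha.nfsd) of user 0 dumped core.",
    "systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.",
]
print(correlate(sample))  # one (comm, pid, unit, result) tuple for the crash above
```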
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
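[Annotation] Each pg_autoscaler pair above follows the same arithmetic: the raw PG target is the pool's share of raw capacity times its bias times a cluster-wide PG budget, and that raw value is then quantized to a power of two subject to per-pool floors that are not visible in this log. A worked check of the first step against four pools above; the budget of 300 is inferred purely from the logged numbers (every pool satisfies target = usage * bias * 300), and relating it to mon_target_pg_per_osd times the OSD count is an assumption.

```python
# Reproduce the raw pg target printed by the autoscaler above:
#   pg_target_raw = capacity_ratio * bias * PG_BUDGET
# PG_BUDGET = 300 is inferred from this log; tying it to
# mon_target_pg_per_osd * number of OSDs is an assumption.

PG_BUDGET = 300

pools = [
    # (name, capacity ratio from "using ... of space", bias, logged pg target)
    (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
    ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
    (".rgw.root",          3.8154424692322717e-07, 1.0, 0.00011446327407696816),
    (".nfs",               6.359070782053786e-08,  1.0, 1.907721234616136e-05),
]

for name, ratio, bias, logged in pools:
    raw = ratio * bias * PG_BUDGET
    assert abs(raw - logged) < 1e-12, name
    print(f"{name}: raw pg target {raw:.12g} (matches the log)")
```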
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:00:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:19 compute-0 sudo[247350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:00:19 compute-0 sudo[247350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:00:19 compute-0 sudo[247350]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000052s ======
Jan 26 10:00:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:19.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000052s
Jan 26 10:00:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:00:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:19.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:20 compute-0 ceph-mon[74456]: pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:00:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:21.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:21 compute-0 ceph-mon[74456]: pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:00:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:21.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
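[Annotation] The anonymous "HEAD / HTTP/1.0" probes arriving every two seconds from 192.168.122.100 and 192.168.122.102 are load-balancer health checks against radosgw. A small sketch that parses the beast access-log format shown above into (client, verb, status, latency); the regex is written against exactly these lines and may need adjusting for other radosgw configurations.

```python
import re

# Parse a radosgw "beast" access-log line like the ones above.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<verb>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous '
        '[26/Jan/2026:10:00:19.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000052s')

m = BEAST_RE.search(line)
print(m.group("client"), m.group("verb"), m.group("status"), m.group("latency"))
# 192.168.122.100 HEAD 200 0.001000052
```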
Jan 26 10:00:22 compute-0 sshd-session[247377]: Invalid user oracle from 157.245.76.178 port 45608
Jan 26 10:00:22 compute-0 sshd-session[247377]: Connection closed by invalid user oracle 157.245.76.178 port 45608 [preauth]
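[Annotation] The two sshd lines above (and the later "banner exchange ... invalid format" entry from 65.49.1.52) are ordinary internet scanning noise against the node. A minimal sketch, assuming a plain-text journal export, that tallies invalid-user probes per source IP using the exact message format shown above.

```python
import re
from collections import Counter

# Count SSH invalid-user probes per source IP from sshd lines like
# "Invalid user oracle from 157.245.76.178 port 45608".
INVALID_RE = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

def tally(lines) -> Counter:
    by_ip = Counter()
    for line in lines:
        m = INVALID_RE.search(line)
        if m:
            by_ip[m.group(2)] += 1
    return by_ip

print(tally([
    "sshd-session[247377]: Invalid user oracle from 157.245.76.178 port 45608",
]))  # Counter({'157.245.76.178': 1})
```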
Jan 26 10:00:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 10:00:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9188 writes, 34K keys, 9188 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9188 writes, 2303 syncs, 3.99 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 773 writes, 1164 keys, 773 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
                                           Interval WAL: 773 writes, 386 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
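[Annotation] The block above is RocksDB's periodic stats dump from the OSD (600 s interval), with one compaction table per BlueStore column family (default, m-*, p-*, O-*, L, P). A minimal sketch that pulls the headline write and WAL counters out of the "** DB Stats **" section; the patterns mirror the exact lines printed above, and the sample text below is truncated from those lines for brevity.

```python
import re

# Extract headline counters from a RocksDB "** DB Stats **" dump like
# the one above (cumulative/interval writes and WAL syncs).

PATTERNS = {
    "cumulative_writes":    r"Cumulative writes: (\d+) writes",
    "cumulative_wal_syncs": r"Cumulative WAL: \d+ writes, (\d+) syncs",
    "interval_writes":      r"Interval writes: (\d+) writes",
    "interval_wal_syncs":   r"Interval WAL: \d+ writes, (\d+) syncs",
}

def parse_db_stats(text: str) -> dict:
    stats = {}
    for key, pat in PATTERNS.items():
        m = re.search(pat, text)
        if m:
            stats[key] = int(m.group(1))
    return stats

dump = """Cumulative writes: 9188 writes, 34K keys, 9188 commit groups
Cumulative WAL: 9188 writes, 2303 syncs, 3.99 writes per sync
Interval writes: 773 writes, 1164 keys, 773 commit groups
Interval WAL: 773 writes, 386 syncs, 2.00 writes per sync"""

print(parse_db_stats(dump))
# {'cumulative_writes': 9188, 'cumulative_wal_syncs': 2303,
#  'interval_writes': 773, 'interval_wal_syncs': 386}
```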
Jan 26 10:00:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:00:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:23.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000052s ======
Jan 26 10:00:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:23.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000052s
Jan 26 10:00:24 compute-0 ceph-mon[74456]: pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:24 compute-0 sshd-session[247383]: banner exchange: Connection from 65.49.1.52 port 49244: invalid format
Jan 26 10:00:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100024 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:00:24 compute-0 sudo[247509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwnadahbkpqtpjcyrqirbfumqcktqmnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421624.5338428-2531-23779387784357/AnsiballZ_getent.py'
Jan 26 10:00:24 compute-0 sudo[247509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:25 compute-0 python3.9[247511]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 26 10:00:25 compute-0 sudo[247509]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:25.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:25.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:26 compute-0 sudo[247662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeblewpmngijffzqmrecjaosswrmptfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421625.548935-2555-194231328626145/AnsiballZ_group.py'
Jan 26 10:00:26 compute-0 sudo[247662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:26 compute-0 ceph-mon[74456]: pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:26 compute-0 python3.9[247664]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 10:00:26 compute-0 groupadd[247665]: group added to /etc/group: name=nova, GID=42436
Jan 26 10:00:26 compute-0 groupadd[247665]: group added to /etc/gshadow: name=nova
Jan 26 10:00:26 compute-0 groupadd[247665]: new group: name=nova, GID=42436
Jan 26 10:00:26 compute-0 sudo[247662]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:00:26] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:00:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:00:26] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:00:26 compute-0 sudo[247822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgheoqhjlzkicsiblqeknvptgoirtwjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421626.4777908-2579-20096121965161/AnsiballZ_user.py'
Jan 26 10:00:26 compute-0 sudo[247822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:00:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:00:27.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:00:27 compute-0 python3.9[247824]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 10:00:27 compute-0 useradd[247826]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 26 10:00:27 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 10:00:27 compute-0 useradd[247826]: add 'nova' to group 'libvirt'
Jan 26 10:00:27 compute-0 useradd[247826]: add 'nova' to shadow group 'libvirt'
Jan 26 10:00:27 compute-0 sudo[247822]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000051s ======
Jan 26 10:00:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:27.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000051s
Jan 26 10:00:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:27.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:00:28 compute-0 ceph-mon[74456]: pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:00:28 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 12.
Jan 26 10:00:28 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 10:00:28 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.424s CPU time.
Jan 26 10:00:28 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 10:00:28 compute-0 sshd-session[247860]: Accepted publickey for zuul from 192.168.122.30 port 49176 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 10:00:28 compute-0 systemd-logind[787]: New session 55 of user zuul.
Jan 26 10:00:28 compute-0 systemd[1]: Started Session 55 of User zuul.
Jan 26 10:00:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:00:28 compute-0 sshd-session[247860]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 10:00:29 compute-0 sshd-session[247874]: Received disconnect from 192.168.122.30 port 49176:11: disconnected by user
Jan 26 10:00:29 compute-0 sshd-session[247874]: Disconnected from user zuul 192.168.122.30 port 49176
Jan 26 10:00:29 compute-0 sshd-session[247860]: pam_unix(sshd:session): session closed for user zuul
Jan 26 10:00:29 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Jan 26 10:00:29 compute-0 systemd-logind[787]: Session 55 logged out. Waiting for processes to exit.
Jan 26 10:00:29 compute-0 systemd-logind[787]: Removed session 55.
Jan 26 10:00:29 compute-0 podman[247933]: 2026-01-26 10:00:29.167415741 +0000 UTC m=+0.047746067 container create 5defd66b224a0d8937dd38707979a5e755fa2673724a935f93a74104c800b708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Jan 26 10:00:29 compute-0 podman[247933]: 2026-01-26 10:00:29.143049358 +0000 UTC m=+0.023379734 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a302c9e56bdef9ff79dd2ccaecbcff7b0c76b4ce52722c388b34a51c46a63a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a302c9e56bdef9ff79dd2ccaecbcff7b0c76b4ce52722c388b34a51c46a63a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a302c9e56bdef9ff79dd2ccaecbcff7b0c76b4ce52722c388b34a51c46a63a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a302c9e56bdef9ff79dd2ccaecbcff7b0c76b4ce52722c388b34a51c46a63a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:29 compute-0 podman[247933]: 2026-01-26 10:00:29.254075061 +0000 UTC m=+0.134405417 container init 5defd66b224a0d8937dd38707979a5e755fa2673724a935f93a74104c800b708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:00:29 compute-0 podman[247946]: 2026-01-26 10:00:29.256182711 +0000 UTC m=+0.062113288 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 26 10:00:29 compute-0 podman[247933]: 2026-01-26 10:00:29.259245152 +0000 UTC m=+0.139575478 container start 5defd66b224a0d8937dd38707979a5e755fa2673724a935f93a74104c800b708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 10:00:29 compute-0 bash[247933]: 5defd66b224a0d8937dd38707979a5e755fa2673724a935f93a74104c800b708
Jan 26 10:00:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 10:00:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 10:00:29 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 10:00:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 10:00:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 10:00:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 10:00:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 10:00:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 10:00:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:00:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000052s ======
Jan 26 10:00:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:29.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000052s
Jan 26 10:00:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000052s ======
Jan 26 10:00:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:29.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000052s
Jan 26 10:00:29 compute-0 python3.9[248134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 10:00:30 compute-0 ceph-mon[74456]: pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:00:30 compute-0 python3.9[248255]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421629.3701231-2654-154103426955357/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:30 compute-0 python3.9[248407]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 10:00:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:00:31 compute-0 python3.9[248483]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:31.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000051s ======
Jan 26 10:00:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:31.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000051s
Jan 26 10:00:31 compute-0 python3.9[248633]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 10:00:32 compute-0 ceph-mon[74456]: pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:00:32 compute-0 python3.9[248755]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421631.5095506-2654-52265111527077/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:00:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:00:33 compute-0 python3.9[248906]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 10:00:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000052s ======
Jan 26 10:00:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:33.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000052s
Jan 26 10:00:33 compute-0 python3.9[249027]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421632.769446-2654-74324946579079/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:33.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:00:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:00:34 compute-0 python3.9[249177]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 10:00:34 compute-0 ceph-mon[74456]: pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:00:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:00:34 compute-0 python3.9[249300]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421633.860953-2654-194092597390262/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 10:00:35 compute-0 python3.9[249450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 10:00:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:35 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:00:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:35 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:00:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:35.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:35.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:35 compute-0 python3.9[249571]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421634.9019895-2654-62360827033168/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:36 compute-0 ceph-mon[74456]: pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 10:00:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:00:36] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:00:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:00:36] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:00:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 10:00:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:00:37.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:00:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000051s ======
Jan 26 10:00:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:37.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000051s
Jan 26 10:00:37 compute-0 sudo[249723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwhfqniglrwmfuyzwijkhazfascravec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421637.1840553-2903-250427080004409/AnsiballZ_file.py'
Jan 26 10:00:37 compute-0 sudo[249723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:37 compute-0 python3.9[249725]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 10:00:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:37.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:37 compute-0 sudo[249723]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:00:38 compute-0 sudo[249875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgwfswjynvccttsqpmoljvwcmiskadqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421638.034883-2927-47101982871704/AnsiballZ_copy.py'
Jan 26 10:00:38 compute-0 sudo[249875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:38 compute-0 python3.9[249877]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 10:00:38 compute-0 sudo[249875]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:38 compute-0 ceph-mon[74456]: pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 10:00:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 10:00:39 compute-0 sudo[250029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqfhxlwdlhcemsgdfisgxrlbnfxtudow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421638.771471-2951-176832795942354/AnsiballZ_stat.py'
Jan 26 10:00:39 compute-0 sudo[250029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:39 compute-0 python3.9[250031]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 10:00:39 compute-0 sudo[250029]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000052s ======
Jan 26 10:00:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:39.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000052s
Jan 26 10:00:39 compute-0 sudo[250056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:00:39 compute-0 sudo[250056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:00:39 compute-0 sudo[250056]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:39 compute-0 ceph-mon[74456]: pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 10:00:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000051s ======
Jan 26 10:00:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:39.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000051s
Jan 26 10:00:39 compute-0 sudo[250206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdmeicpqstulwmvmvqeihazrfsgathht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421639.580922-2975-265623197272552/AnsiballZ_stat.py'
Jan 26 10:00:39 compute-0 sudo[250206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:40 compute-0 python3.9[250208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 10:00:40 compute-0 sudo[250206]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:40 compute-0 sudo[250331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsfqsahkncrvcdboykiapnvmjtpagmag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421639.580922-2975-265623197272552/AnsiballZ_copy.py'
Jan 26 10:00:40 compute-0 sudo[250331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:40 compute-0 python3.9[250333]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769421639.580922-2975-265623197272552/.source _original_basename=.718i13rn follow=False checksum=aa772f08cc6f4ae92a92a63719b814eae31967e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 26 10:00:40 compute-0 sudo[250331]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:40 compute-0 sudo[250360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:00:40 compute-0 sudo[250360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:00:40 compute-0 sudo[250360]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:40 compute-0 sudo[250385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:00:40 compute-0 sudo[250385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:00:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 10:00:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 10:00:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 10:00:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 10:00:41 compute-0 sudo[250385]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b38000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 10:00:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 10:00:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000051s ======
Jan 26 10:00:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:41.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000051s
Jan 26 10:00:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 10:00:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 10:00:41 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:41.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:41 compute-0 python3.9[250581]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 10:00:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:00:42 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:00:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:00:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:00:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:00:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:00:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:00:42 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:00:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:00:42 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:00:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:00:42 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:00:42 compute-0 ceph-mon[74456]: pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 10:00:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:00:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:00:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:00:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:00:42 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:00:42 compute-0 sudo[250660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:00:42 compute-0 sudo[250660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:00:42 compute-0 sudo[250660]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:42 compute-0 sudo[250688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:00:42 compute-0 sudo[250688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:00:42 compute-0 podman[250685]: 2026-01-26 10:00:42.54816437 +0000 UTC m=+0.125472749 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 10:00:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:42 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:42 compute-0 podman[250853]: 2026-01-26 10:00:42.859280484 +0000 UTC m=+0.042152855 container create c2d871ab4f22c0d2d5a847c506f2bdc2b7626dca8194ad233ad96633be076c4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_spence, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:00:42 compute-0 systemd[1]: Started libpod-conmon-c2d871ab4f22c0d2d5a847c506f2bdc2b7626dca8194ad233ad96633be076c4c.scope.
Jan 26 10:00:42 compute-0 python3.9[250835]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 10:00:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:00:42 compute-0 podman[250853]: 2026-01-26 10:00:42.839824787 +0000 UTC m=+0.022697178 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:00:42 compute-0 podman[250853]: 2026-01-26 10:00:42.939373301 +0000 UTC m=+0.122245702 container init c2d871ab4f22c0d2d5a847c506f2bdc2b7626dca8194ad233ad96633be076c4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_spence, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 26 10:00:42 compute-0 podman[250853]: 2026-01-26 10:00:42.947823312 +0000 UTC m=+0.130695683 container start c2d871ab4f22c0d2d5a847c506f2bdc2b7626dca8194ad233ad96633be076c4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 10:00:42 compute-0 podman[250853]: 2026-01-26 10:00:42.951262662 +0000 UTC m=+0.134135053 container attach c2d871ab4f22c0d2d5a847c506f2bdc2b7626dca8194ad233ad96633be076c4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 10:00:42 compute-0 romantic_spence[250869]: 167 167
Jan 26 10:00:42 compute-0 systemd[1]: libpod-c2d871ab4f22c0d2d5a847c506f2bdc2b7626dca8194ad233ad96633be076c4c.scope: Deactivated successfully.
Jan 26 10:00:42 compute-0 podman[250853]: 2026-01-26 10:00:42.956805781 +0000 UTC m=+0.139678152 container died c2d871ab4f22c0d2d5a847c506f2bdc2b7626dca8194ad233ad96633be076c4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_spence, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 26 10:00:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f0307465fda395410bf442f40a44991dcd9ee6e14db7fdf8ea724eefc0aa195-merged.mount: Deactivated successfully.
Jan 26 10:00:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:00:42 compute-0 podman[250853]: 2026-01-26 10:00:42.998241288 +0000 UTC m=+0.181113659 container remove c2d871ab4f22c0d2d5a847c506f2bdc2b7626dca8194ad233ad96633be076c4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_spence, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 10:00:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 10:00:43 compute-0 systemd[1]: libpod-conmon-c2d871ab4f22c0d2d5a847c506f2bdc2b7626dca8194ad233ad96633be076c4c.scope: Deactivated successfully.
Jan 26 10:00:43 compute-0 podman[250941]: 2026-01-26 10:00:43.199382211 +0000 UTC m=+0.065749267 container create ca9a07ad130537edd7939a837425ad3bb519a97aee6cb454cf9280d7422bd631 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:00:43 compute-0 systemd[1]: Started libpod-conmon-ca9a07ad130537edd7939a837425ad3bb519a97aee6cb454cf9280d7422bd631.scope.
Jan 26 10:00:43 compute-0 podman[250941]: 2026-01-26 10:00:43.158133836 +0000 UTC m=+0.024500912 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:00:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a108282ed68ad20aa48f701da0b5d2c65daa4001d0d593c2d0cc33574cbfeb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a108282ed68ad20aa48f701da0b5d2c65daa4001d0d593c2d0cc33574cbfeb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a108282ed68ad20aa48f701da0b5d2c65daa4001d0d593c2d0cc33574cbfeb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a108282ed68ad20aa48f701da0b5d2c65daa4001d0d593c2d0cc33574cbfeb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a108282ed68ad20aa48f701da0b5d2c65daa4001d0d593c2d0cc33574cbfeb7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:43 compute-0 podman[250941]: 2026-01-26 10:00:43.282284986 +0000 UTC m=+0.148652062 container init ca9a07ad130537edd7939a837425ad3bb519a97aee6cb454cf9280d7422bd631 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:00:43 compute-0 podman[250941]: 2026-01-26 10:00:43.293540834 +0000 UTC m=+0.159907890 container start ca9a07ad130537edd7939a837425ad3bb519a97aee6cb454cf9280d7422bd631 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_feistel, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 10:00:43 compute-0 podman[250941]: 2026-01-26 10:00:43.297907102 +0000 UTC m=+0.164274178 container attach ca9a07ad130537edd7939a837425ad3bb519a97aee6cb454cf9280d7422bd631 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 26 10:00:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:43 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:43 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:43.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
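The three radosgw lines above trace one anonymous HEAD / probe end to end: request start, completion with op status and HTTP status, and the beast access record carrying client IP, user, timestamp, request line, status, body bytes, and latency. A minimal sketch for pulling those fields out of such beast lines follows; the pattern is inferred from the lines in this log, not from any documented stable format:

    import re

    # Assumed layout, copied from the radosgw[96326] beast lines above:
    # beast: 0x...: <ip> - <user> [<ts>] "<request>" <status> <bytes> - - - latency=<sec>s
    BEAST_RE = re.compile(
        r'beast: 0x[0-9a-f]+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous '
            '[26/Jan/2026:10:00:43.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group('ip'), m.group('req'), m.group('status'), m.group('latency'))

Run against the line above, this prints "192.168.122.100 HEAD / HTTP/1.0 200 0.000000000"; the repeated HEAD / probes from 192.168.122.100 and .102 throughout this window all match the same shape.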
Jan 26 10:00:43 compute-0 python3.9[251034]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421642.1854138-3053-163918638812685/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=53b8456782b81b5794d3eef3fadcfb00db1088a8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
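Paired with the ansible-ansible.legacy.stat call at 10:00:42 (checksum_algorithm=sha1), this copy is Ansible's usual idempotence handshake: the destination is rewritten only when the SHA-1 of the rendered template (53b8456782b81b5794d3eef3fadcfb00db1088a8 here) differs from the checksum stat reported for /var/lib/openstack/config/containers/nova_compute.json. A minimal sketch of that comparison, assuming nothing beyond the fields logged here (an illustration, not the module's code):

    import hashlib

    def sha1_of(path: str) -> str:
        # Stream the file in chunks, matching the hex digest format
        # seen in the checksum= fields above.
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                h.update(chunk)
        return h.hexdigest()

    def needs_copy(src: str, dest: str) -> bool:
        # Hypothetical helper: copy only when content differs or dest is absent.
        try:
            return sha1_of(src) != sha1_of(dest)
        except FileNotFoundError:
            return True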
Jan 26 10:00:43 compute-0 elegant_feistel[251002]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:00:43 compute-0 elegant_feistel[251002]: --> All data devices are unavailable
Jan 26 10:00:43 compute-0 podman[250941]: 2026-01-26 10:00:43.63878013 +0000 UTC m=+0.505147186 container died ca9a07ad130537edd7939a837425ad3bb519a97aee6cb454cf9280d7422bd631 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:00:43 compute-0 systemd[1]: libpod-ca9a07ad130537edd7939a837425ad3bb519a97aee6cb454cf9280d7422bd631.scope: Deactivated successfully.
Jan 26 10:00:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a108282ed68ad20aa48f701da0b5d2c65daa4001d0d593c2d0cc33574cbfeb7-merged.mount: Deactivated successfully.
Jan 26 10:00:43 compute-0 podman[250941]: 2026-01-26 10:00:43.676915194 +0000 UTC m=+0.543282250 container remove ca9a07ad130537edd7939a837425ad3bb519a97aee6cb454cf9280d7422bd631 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_feistel, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 10:00:43 compute-0 systemd[1]: libpod-conmon-ca9a07ad130537edd7939a837425ad3bb519a97aee6cb454cf9280d7422bd631.scope: Deactivated successfully.
Jan 26 10:00:43 compute-0 sudo[250688]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:43.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:43 compute-0 sudo[251081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:00:43 compute-0 sudo[251081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:00:43 compute-0 sudo[251081]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:43 compute-0 sudo[251106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:00:43 compute-0 sudo[251106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:00:43 compute-0 ceph-mon[74456]: pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 10:00:44 compute-0 podman[251195]: 2026-01-26 10:00:44.253753727 +0000 UTC m=+0.052790370 container create ea4fbe698bcafdce9970424ea6d69262a4c8b7834d35b0711fd781f90e8486fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 26 10:00:44 compute-0 systemd[1]: Started libpod-conmon-ea4fbe698bcafdce9970424ea6d69262a4c8b7834d35b0711fd781f90e8486fa.scope.
Jan 26 10:00:44 compute-0 podman[251195]: 2026-01-26 10:00:44.225250277 +0000 UTC m=+0.024286960 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:00:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:00:44 compute-0 podman[251195]: 2026-01-26 10:00:44.345329484 +0000 UTC m=+0.144366137 container init ea4fbe698bcafdce9970424ea6d69262a4c8b7834d35b0711fd781f90e8486fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_meitner, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:00:44 compute-0 podman[251195]: 2026-01-26 10:00:44.35539713 +0000 UTC m=+0.154433783 container start ea4fbe698bcafdce9970424ea6d69262a4c8b7834d35b0711fd781f90e8486fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 26 10:00:44 compute-0 podman[251195]: 2026-01-26 10:00:44.358319443 +0000 UTC m=+0.157356096 container attach ea4fbe698bcafdce9970424ea6d69262a4c8b7834d35b0711fd781f90e8486fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_meitner, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:00:44 compute-0 confident_meitner[251240]: 167 167
Jan 26 10:00:44 compute-0 systemd[1]: libpod-ea4fbe698bcafdce9970424ea6d69262a4c8b7834d35b0711fd781f90e8486fa.scope: Deactivated successfully.
Jan 26 10:00:44 compute-0 podman[251195]: 2026-01-26 10:00:44.362451529 +0000 UTC m=+0.161488192 container died ea4fbe698bcafdce9970424ea6d69262a4c8b7834d35b0711fd781f90e8486fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_meitner, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:00:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e56dacd67f45a637314e73c5c1f0d327148c50abe6423a5cc4db5f4d38d4d4ca-merged.mount: Deactivated successfully.
Jan 26 10:00:44 compute-0 podman[251195]: 2026-01-26 10:00:44.399758659 +0000 UTC m=+0.198795302 container remove ea4fbe698bcafdce9970424ea6d69262a4c8b7834d35b0711fd781f90e8486fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_meitner, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:00:44 compute-0 systemd[1]: libpod-conmon-ea4fbe698bcafdce9970424ea6d69262a4c8b7834d35b0711fd781f90e8486fa.scope: Deactivated successfully.
Jan 26 10:00:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100044 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 10:00:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:44 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:44 compute-0 podman[251339]: 2026-01-26 10:00:44.634298139 +0000 UTC m=+0.060406199 container create 48c5c47fa8a9935ecbc9d2a282f068925c18602c54233f392929f19e7a1be46d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_boyd, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 26 10:00:44 compute-0 python3.9[251333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 10:00:44 compute-0 systemd[1]: Started libpod-conmon-48c5c47fa8a9935ecbc9d2a282f068925c18602c54233f392929f19e7a1be46d.scope.
Jan 26 10:00:44 compute-0 podman[251339]: 2026-01-26 10:00:44.608000555 +0000 UTC m=+0.034108705 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:00:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc13e561a7cfe02affc30e60b4e965caf572576274bb1b5a26d5d1e265d66cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc13e561a7cfe02affc30e60b4e965caf572576274bb1b5a26d5d1e265d66cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc13e561a7cfe02affc30e60b4e965caf572576274bb1b5a26d5d1e265d66cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc13e561a7cfe02affc30e60b4e965caf572576274bb1b5a26d5d1e265d66cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:44 compute-0 podman[251339]: 2026-01-26 10:00:44.74278507 +0000 UTC m=+0.168893170 container init 48c5c47fa8a9935ecbc9d2a282f068925c18602c54233f392929f19e7a1be46d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_boyd, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:00:44 compute-0 podman[251339]: 2026-01-26 10:00:44.751885365 +0000 UTC m=+0.177993465 container start 48c5c47fa8a9935ecbc9d2a282f068925c18602c54233f392929f19e7a1be46d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_boyd, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 10:00:44 compute-0 podman[251339]: 2026-01-26 10:00:44.756835175 +0000 UTC m=+0.182943275 container attach 48c5c47fa8a9935ecbc9d2a282f068925c18602c54233f392929f19e7a1be46d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_boyd, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 26 10:00:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]: {
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:     "0": [
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:         {
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:             "devices": [
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "/dev/loop3"
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:             ],
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:             "lv_name": "ceph_lv0",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:             "lv_size": "21470642176",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:             "name": "ceph_lv0",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:             "tags": {
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "ceph.cluster_name": "ceph",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "ceph.crush_device_class": "",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "ceph.encrypted": "0",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "ceph.osd_id": "0",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "ceph.type": "block",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "ceph.vdo": "0",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:                 "ceph.with_tpm": "0"
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:             },
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:             "type": "block",
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:             "vg_name": "ceph_vg0"
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:         }
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]:     ]
Jan 26 10:00:45 compute-0 hardcore_boyd[251356]: }
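The JSON block above is the output of the cephadm-wrapped `ceph-volume --fsid 1a70b85d-... -- lvm list --format json` launched via sudo at 10:00:43: a map from OSD id to its logical volumes, with the backing devices and ceph.* LV tags. As a sketch of how such a payload could be reduced to a per-OSD summary (a hypothetical helper, not part of cephadm):

    import json
    import sys

    def summarize_lvm_list(payload: str) -> None:
        # Assumed structure, exactly as printed by the container above:
        # { "<osd_id>": [ { "devices": [...], "lv_path": ..., "tags": {...} }, ... ] }
        data = json.loads(payload)
        for osd_id, lvs in sorted(data.items()):
            for lv in lvs:
                tags = lv.get("tags", {})
                print(
                    f"osd.{osd_id}: type={lv.get('type')} "
                    f"lv={lv.get('lv_path')} "
                    f"devices={','.join(lv.get('devices', []))} "
                    f"osd_fsid={tags.get('ceph.osd_fsid', '?')}"
                )

    if __name__ == "__main__":
        summarize_lvm_list(sys.stdin.read())

Fed the block above, it prints one line: osd.0: type=block lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49. The companion `raw list --format json` run at 10:00:45 returns {} here (strange_perlman), consistent with the earlier ceph-volume report that all data devices are LVM rather than raw.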
Jan 26 10:00:45 compute-0 systemd[1]: libpod-48c5c47fa8a9935ecbc9d2a282f068925c18602c54233f392929f19e7a1be46d.scope: Deactivated successfully.
Jan 26 10:00:45 compute-0 podman[251339]: 2026-01-26 10:00:45.132366474 +0000 UTC m=+0.558474534 container died 48c5c47fa8a9935ecbc9d2a282f068925c18602c54233f392929f19e7a1be46d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_boyd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 10:00:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-abc13e561a7cfe02affc30e60b4e965caf572576274bb1b5a26d5d1e265d66cf-merged.mount: Deactivated successfully.
Jan 26 10:00:45 compute-0 podman[251339]: 2026-01-26 10:00:45.182121186 +0000 UTC m=+0.608229256 container remove 48c5c47fa8a9935ecbc9d2a282f068925c18602c54233f392929f19e7a1be46d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_boyd, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 26 10:00:45 compute-0 systemd[1]: libpod-conmon-48c5c47fa8a9935ecbc9d2a282f068925c18602c54233f392929f19e7a1be46d.scope: Deactivated successfully.
Jan 26 10:00:45 compute-0 sudo[251106]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:45 compute-0 sudo[251499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:00:45 compute-0 sudo[251499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:00:45 compute-0 sudo[251499]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:45 compute-0 python3.9[251491]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769421644.1897056-3098-168168581495396/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=0333d3a3f5c3a0526b0ebe430250032166710e8a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 10:00:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:45 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:45 compute-0 sudo[251524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:00:45 compute-0 sudo[251524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:00:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:45 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000052s ======
Jan 26 10:00:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:45.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000052s
Jan 26 10:00:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:45.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:45 compute-0 podman[251665]: 2026-01-26 10:00:45.953322278 +0000 UTC m=+0.055018337 container create 79c035ec429efa3484a4aae59a546c3e01d3251b86d0e401a047edfc5fd31dbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_engelbart, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:00:46 compute-0 systemd[1]: Started libpod-conmon-79c035ec429efa3484a4aae59a546c3e01d3251b86d0e401a047edfc5fd31dbe.scope.
Jan 26 10:00:46 compute-0 podman[251665]: 2026-01-26 10:00:45.92732702 +0000 UTC m=+0.029023169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:00:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:00:46 compute-0 podman[251665]: 2026-01-26 10:00:46.054969842 +0000 UTC m=+0.156665981 container init 79c035ec429efa3484a4aae59a546c3e01d3251b86d0e401a047edfc5fd31dbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_engelbart, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:00:46 compute-0 podman[251665]: 2026-01-26 10:00:46.067844284 +0000 UTC m=+0.169540353 container start 79c035ec429efa3484a4aae59a546c3e01d3251b86d0e401a047edfc5fd31dbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 10:00:46 compute-0 podman[251665]: 2026-01-26 10:00:46.073535902 +0000 UTC m=+0.175231981 container attach 79c035ec429efa3484a4aae59a546c3e01d3251b86d0e401a047edfc5fd31dbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 26 10:00:46 compute-0 exciting_engelbart[251684]: 167 167
Jan 26 10:00:46 compute-0 systemd[1]: libpod-79c035ec429efa3484a4aae59a546c3e01d3251b86d0e401a047edfc5fd31dbe.scope: Deactivated successfully.
Jan 26 10:00:46 compute-0 conmon[251684]: conmon 79c035ec429efa3484a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79c035ec429efa3484a4aae59a546c3e01d3251b86d0e401a047edfc5fd31dbe.scope/container/memory.events
Jan 26 10:00:46 compute-0 podman[251665]: 2026-01-26 10:00:46.0777095 +0000 UTC m=+0.179405549 container died 79c035ec429efa3484a4aae59a546c3e01d3251b86d0e401a047edfc5fd31dbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_engelbart, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 26 10:00:46 compute-0 ceph-mon[74456]: pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 10:00:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfee5c9d4a2b4e50acf09050fba87b0cfb3a9aa73765da4281bc0a974d5f4d64-merged.mount: Deactivated successfully.
Jan 26 10:00:46 compute-0 podman[251665]: 2026-01-26 10:00:46.123711215 +0000 UTC m=+0.225407264 container remove 79c035ec429efa3484a4aae59a546c3e01d3251b86d0e401a047edfc5fd31dbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_engelbart, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 10:00:46 compute-0 systemd[1]: libpod-conmon-79c035ec429efa3484a4aae59a546c3e01d3251b86d0e401a047edfc5fd31dbe.scope: Deactivated successfully.
Jan 26 10:00:46 compute-0 sudo[251772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lavtfvuoljsefsilktumuzqkadtqrbfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421645.7725246-3149-237208122439414/AnsiballZ_container_config_data.py'
Jan 26 10:00:46 compute-0 sudo[251772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:46 compute-0 ceph-osd[82841]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000043s
Jan 26 10:00:46 compute-0 podman[251779]: 2026-01-26 10:00:46.298149144 +0000 UTC m=+0.048675636 container create 595b128769c05e8cd87ec889a60a9321fb7c688cdb7711f6d43986586955b40a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_perlman, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:00:46 compute-0 systemd[1]: Started libpod-conmon-595b128769c05e8cd87ec889a60a9321fb7c688cdb7711f6d43986586955b40a.scope.
Jan 26 10:00:46 compute-0 podman[251779]: 2026-01-26 10:00:46.275486238 +0000 UTC m=+0.026012720 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:00:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e327062223047fa9aabd2003c310dcd5bbd6cf2b61f28dd9f5e634fb94f54131/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e327062223047fa9aabd2003c310dcd5bbd6cf2b61f28dd9f5e634fb94f54131/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e327062223047fa9aabd2003c310dcd5bbd6cf2b61f28dd9f5e634fb94f54131/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e327062223047fa9aabd2003c310dcd5bbd6cf2b61f28dd9f5e634fb94f54131/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:00:46 compute-0 podman[251779]: 2026-01-26 10:00:46.406320988 +0000 UTC m=+0.156847500 container init 595b128769c05e8cd87ec889a60a9321fb7c688cdb7711f6d43986586955b40a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_perlman, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:00:46 compute-0 podman[251779]: 2026-01-26 10:00:46.41478055 +0000 UTC m=+0.165307052 container start 595b128769c05e8cd87ec889a60a9321fb7c688cdb7711f6d43986586955b40a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:00:46 compute-0 podman[251779]: 2026-01-26 10:00:46.418557858 +0000 UTC m=+0.169084360 container attach 595b128769c05e8cd87ec889a60a9321fb7c688cdb7711f6d43986586955b40a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:00:46 compute-0 python3.9[251780]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 26 10:00:46 compute-0 sudo[251772]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:46 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:00:46] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:00:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:00:46] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:00:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:00:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:00:47.068Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:00:47 compute-0 lvm[251920]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:00:47 compute-0 lvm[251920]: VG ceph_vg0 finished
Jan 26 10:00:47 compute-0 strange_perlman[251798]: {}
Jan 26 10:00:47 compute-0 systemd[1]: libpod-595b128769c05e8cd87ec889a60a9321fb7c688cdb7711f6d43986586955b40a.scope: Deactivated successfully.
Jan 26 10:00:47 compute-0 podman[251779]: 2026-01-26 10:00:47.298190128 +0000 UTC m=+1.048716600 container died 595b128769c05e8cd87ec889a60a9321fb7c688cdb7711f6d43986586955b40a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_perlman, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:00:47 compute-0 systemd[1]: libpod-595b128769c05e8cd87ec889a60a9321fb7c688cdb7711f6d43986586955b40a.scope: Consumed 1.403s CPU time.
Jan 26 10:00:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:47 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:47 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20001fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:47.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:47 compute-0 sudo[252038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhuhnpkdasvhzjnhttgdmqaczcdqbnwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421647.1727428-3182-239638474882203/AnsiballZ_container_config_hash.py'
Jan 26 10:00:47 compute-0 sudo[252038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000052s ======
Jan 26 10:00:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:47.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000052s
Jan 26 10:00:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e327062223047fa9aabd2003c310dcd5bbd6cf2b61f28dd9f5e634fb94f54131-merged.mount: Deactivated successfully.
Jan 26 10:00:47 compute-0 podman[251779]: 2026-01-26 10:00:47.86791193 +0000 UTC m=+1.618438412 container remove 595b128769c05e8cd87ec889a60a9321fb7c688cdb7711f6d43986586955b40a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:00:47 compute-0 systemd[1]: libpod-conmon-595b128769c05e8cd87ec889a60a9321fb7c688cdb7711f6d43986586955b40a.scope: Deactivated successfully.
Jan 26 10:00:47 compute-0 sudo[251524]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:00:47 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:00:47 compute-0 python3.9[252040]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 10:00:47 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:47 compute-0 sudo[252038]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
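The mon's _set_new_cache_sizes line reports the cache autotuner's targets as raw byte counts; converting them makes this frequently repeated line readable (plain arithmetic, not a Ceph API):

    for name, nbytes in {"cache_size": 1020054731,
                         "inc_alloc": 348127232,
                         "full_alloc": 348127232,
                         "kv_alloc": 318767104}.items():
        print(f"{name}: {nbytes / 2**20:.0f} MiB")
    # cache_size: 973 MiB, inc_alloc/full_alloc: 332 MiB, kv_alloc: 304 MiB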
Jan 26 10:00:48 compute-0 sudo[252042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:00:48 compute-0 sudo[252042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:00:48 compute-0 sudo[252042]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:48 compute-0 ceph-mon[74456]: pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:00:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:00:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:48 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:00:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:00:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:00:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:00:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:00:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:00:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:00:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:00:48 compute-0 sudo[252218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljxfvqqyuhywxuebmqezkgwrdlcokxka ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769421648.3739824-3212-57786444703057/AnsiballZ_edpm_container_manage.py'
Jan 26 10:00:48 compute-0 sudo[252218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:00:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:00:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:00:49 compute-0 python3[252220]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
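The ansible-edpm_container_manage invocation shows the module's inputs: scan config_dir for container definitions matching config_patterns and reconcile them at the given concurrency. A simplified sketch of that discovery step under those parameters (names mirror the logged arguments; the real edpm-ansible module also computes config hashes and manages systemd units):

    import fnmatch, json
    from pathlib import Path

    config_dir = Path("/var/lib/openstack/config/containers")
    config_patterns = "nova_compute_init.json"   # from the logged invocation

    # one JSON file per managed container definition
    definitions = {
        p.stem: json.loads(p.read_text())
        for p in config_dir.iterdir()
        if fnmatch.fnmatch(p.name, config_patterns)
    }
    print(sorted(definitions))   # e.g. ['nova_compute_init']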
Jan 26 10:00:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:49 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:49 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:49.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:49.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:50 compute-0 ceph-mon[74456]: pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:00:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:50 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 10:00:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:51 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:51 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:51.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:51.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:52 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:52 compute-0 ceph-mon[74456]: pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 10:00:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:00:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 10:00:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:53 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:53 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:53.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:53.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:54 compute-0 ceph-mon[74456]: pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 10:00:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:54 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:00:54.684 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:00:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:00:54.684 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:00:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:00:54.685 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:00:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 10:00:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:55 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:55 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:55.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:55.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:56 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:00:56] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 26 10:00:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:00:56] "GET /metrics HTTP/1.1" 200 48269 "" "Prometheus/2.51.0"
Jan 26 10:00:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:00:57.070Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:00:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:00:57.070Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
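Both alertmanager failures are plain HTTP delivery problems: webhook[1] (compute-1) hits its context deadline, while webhook[2] (compute-2) gets a TCP i/o timeout on 192.168.122.102:8443. Note that the receivers are addressed with http:// on port 8443, which is worth double-checking against the dashboard's TLS settings. Replaying the POST by hand separates a network problem from a dead receiver (sketch; URL copied from the log):

    import urllib.request

    url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        urllib.request.urlopen(url, data=b"{}", timeout=5)  # data= makes it a POST
    except OSError as exc:   # URLError subclasses OSError; covers timeouts too
        print("receiver unreachable:", exc)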
Jan 26 10:00:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:57 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:57 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000051s ======
Jan 26 10:00:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:57.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000051s
Jan 26 10:00:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000051s ======
Jan 26 10:00:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:57.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000051s
Jan 26 10:00:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:00:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:58 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:00:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:59 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:59 compute-0 podman[252234]: 2026-01-26 10:00:59.365789916 +0000 UTC m=+10.124706997 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Jan 26 10:00:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:00:59 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:00:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:00:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:00:59.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:00:59 compute-0 podman[252331]: 2026-01-26 10:00:59.520578828 +0000 UTC m=+0.061095615 container create b2f05eda4dc9988e3e2cd6a10f9a5dd30ad8fcb7ae5bccd1eb356560b56082b4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, container_name=nova_compute_init, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 10:00:59 compute-0 podman[252331]: 2026-01-26 10:00:59.486665826 +0000 UTC m=+0.027182663 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Jan 26 10:00:59 compute-0 python3[252220]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
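The PODMAN-CONTAINER-DEBUG line records how the module flattens the config_data dict into podman create flags. A simplified reconstruction of that mapping (a sketch, not the actual edpm_ansible code, which also emits --label, --security-opt, and healthcheck options):

    def podman_create_argv(name: str, cfg: dict) -> list:
        argv = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, val in sorted(cfg.get("environment", {}).items()):
            argv += ["--env", f"{key}={val}"]
        argv += ["--log-driver", "journald",
                 "--network", cfg.get("net", "bridge"),
                 "--privileged=%s" % cfg.get("privileged", False),
                 "--user", cfg.get("user", "root")]
        for vol in cfg.get("volumes", []):          # one --volume per mount
            argv += ["--volume", vol]
        argv.append(cfg["image"])
        argv += cfg.get("command", "").split()      # container entrypoint args
        return argv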
Jan 26 10:00:59 compute-0 sudo[252345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:00:59 compute-0 sudo[252345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:00:59 compute-0 sudo[252345]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:59 compute-0 sudo[252218]: pam_unix(sudo:session): session closed for user root
Jan 26 10:00:59 compute-0 podman[252381]: 2026-01-26 10:00:59.687919665 +0000 UTC m=+0.064663121 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 10:00:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:00:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 26 10:00:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:00:59.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 26 10:01:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:00 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:01:01 compute-0 ceph-mon[74456]: pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 10:01:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:01 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:01 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:01.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:01.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:01 compute-0 CROND[252443]: (root) CMD (run-parts /etc/cron.hourly)
Jan 26 10:01:01 compute-0 run-parts[252446]: (/etc/cron.hourly) starting 0anacron
Jan 26 10:01:01 compute-0 run-parts[252452]: (/etc/cron.hourly) finished 0anacron
Jan 26 10:01:01 compute-0 CROND[252442]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 26 10:01:02 compute-0 ceph-mon[74456]: pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:02 compute-0 ceph-mon[74456]: pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:02 compute-0 ceph-mon[74456]: pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:01:02 compute-0 sudo[252579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpoqqnfakrlgpgrzaisrhrhoxwmuqaod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421662.1114743-3236-31750707301468/AnsiballZ_stat.py'
Jan 26 10:01:02 compute-0 sudo[252579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:01:02 compute-0 python3.9[252581]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 10:01:02 compute-0 sudo[252579]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:02 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:01:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:03 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:03 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:03.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:01:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:01:03 compute-0 sudo[252734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goewnouycnpiykrdeawpdpvyjazzskbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421663.4442403-3272-42137391281012/AnsiballZ_container_config_data.py'
Jan 26 10:01:03 compute-0 sudo[252734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:01:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:03.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:03 compute-0 python3.9[252736]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 26 10:01:03 compute-0 sudo[252734]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:04 compute-0 ceph-mon[74456]: pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:04 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:01:05 compute-0 sudo[252888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdntbieqsxgvupgmyemhnihfjrdcfyhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421664.7723641-3305-6545078377927/AnsiballZ_container_config_hash.py'
Jan 26 10:01:05 compute-0 sudo[252888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:01:05 compute-0 python3.9[252890]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 10:01:05 compute-0 sudo[252888]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:05 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:05 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:05.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:05.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:01:06 compute-0 sshd-session[252915]: Invalid user oracle from 157.245.76.178 port 53938
Jan 26 10:01:06 compute-0 sshd-session[252915]: Connection closed by invalid user oracle 157.245.76.178 port 53938 [preauth]
Jan 26 10:01:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:06 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:01:06] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:01:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:01:06] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:01:06 compute-0 ceph-mon[74456]: pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:01:06 compute-0 sudo[253046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqswthqnbesuwalenbfrhlvggokbgpmi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769421666.660055-3335-105601442197904/AnsiballZ_edpm_container_manage.py'
Jan 26 10:01:06 compute-0 sudo[253046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:01:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:01:07.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:01:07 compute-0 python3[253048]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 10:01:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:07 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:07 compute-0 podman[253085]: 2026-01-26 10:01:07.42992124 +0000 UTC m=+0.057481726 container create 87d6f17db4c9589cc50c03d7f1672222dfc8b57b725dd2c4afea0e95ae3cc771 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:01:07 compute-0 podman[253085]: 2026-01-26 10:01:07.401640403 +0000 UTC m=+0.029200919 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Jan 26 10:01:07 compute-0 python3[253048]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b kolla_start
Jan 26 10:01:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:07 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 10:01:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:07.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 10:01:07 compute-0 sudo[253046]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:07.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:07 compute-0 ceph-mon[74456]: pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:01:08 compute-0 sudo[253272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zogcpnsvjdvazrskjcwvboedpmlccuim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421668.096885-3359-223218637407228/AnsiballZ_stat.py'
Jan 26 10:01:08 compute-0 sudo[253272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:01:08 compute-0 python3.9[253274]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 10:01:08 compute-0 sudo[253272]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:08 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:09 compute-0 sudo[253427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqwmwqjaqktudnkphrlbnfefkgykclbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421668.9337-3386-14138609998550/AnsiballZ_file.py'
Jan 26 10:01:09 compute-0 sudo[253427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:01:09 compute-0 sshd-session[252919]: Received disconnect from 117.50.196.2 port 47582:11:  [preauth]
Jan 26 10:01:09 compute-0 sshd-session[252919]: Disconnected from authenticating user root 117.50.196.2 port 47582 [preauth]
Jan 26 10:01:09 compute-0 python3.9[253429]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 10:01:09 compute-0 sudo[253427]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:09 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:09 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:09.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:09 compute-0 ceph-mon[74456]: pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:09.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:09 compute-0 sudo[253578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owdllfokcuaozxymozhpozavjurjnwyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421669.4202237-3386-16470312342133/AnsiballZ_copy.py'
Jan 26 10:01:09 compute-0 sudo[253578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:01:09 compute-0 python3.9[253580]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769421669.4202237-3386-16470312342133/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 10:01:10 compute-0 sudo[253578]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:10 compute-0 sudo[253654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urvsaachyyidpewkckuqrxyfwjgkeoah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421669.4202237-3386-16470312342133/AnsiballZ_systemd.py'
Jan 26 10:01:10 compute-0 sudo[253654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:01:10 compute-0 python3.9[253656]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 10:01:10 compute-0 systemd[1]: Reloading.
Jan 26 10:01:10 compute-0 systemd-rc-local-generator[253688]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 10:01:10 compute-0 systemd-sysv-generator[253691]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 10:01:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:10 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:10 compute-0 sudo[253654]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:01:11 compute-0 sudo[253769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwvbfsbnjxtlbkcprusottvtrexnlili ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421669.4202237-3386-16470312342133/AnsiballZ_systemd.py'
Jan 26 10:01:11 compute-0 sudo[253769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:01:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:11 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:11 compute-0 python3.9[253771]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
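The ansible-systemd steps at 10:01:10-10:01:11 (daemon_reload=True, then state=restarted with enabled=True) reduce to three systemctl operations:

    import subprocess

    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", "edpm_nova_compute.service"], check=True)
    subprocess.run(["systemctl", "restart", "edpm_nova_compute.service"], check=True)

The two 'Reloading.' entries and the regenerated rc.local / SysV compatibility warnings around them are the visible side effects of those reloads.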
Jan 26 10:01:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:11 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:11 compute-0 systemd[1]: Reloading.
Jan 26 10:01:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:11.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:11 compute-0 systemd-sysv-generator[253803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 10:01:11 compute-0 systemd-rc-local-generator[253796]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 10:01:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 26 10:01:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:11.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 26 10:01:11 compute-0 systemd[1]: Starting nova_compute container...
Jan 26 10:01:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95373d6508b1875f889c001254e666746adffba07de0841494aa9fb4bcd9742f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95373d6508b1875f889c001254e666746adffba07de0841494aa9fb4bcd9742f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95373d6508b1875f889c001254e666746adffba07de0841494aa9fb4bcd9742f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95373d6508b1875f889c001254e666746adffba07de0841494aa9fb4bcd9742f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95373d6508b1875f889c001254e666746adffba07de0841494aa9fb4bcd9742f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:11 compute-0 podman[253811]: 2026-01-26 10:01:11.950328359 +0000 UTC m=+0.136849361 container init 87d6f17db4c9589cc50c03d7f1672222dfc8b57b725dd2c4afea0e95ae3cc771 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 10:01:11 compute-0 podman[253811]: 2026-01-26 10:01:11.964291862 +0000 UTC m=+0.150812834 container start 87d6f17db4c9589cc50c03d7f1672222dfc8b57b725dd2c4afea0e95ae3cc771 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, config_id=edpm, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 26 10:01:11 compute-0 podman[253811]: nova_compute
Jan 26 10:01:11 compute-0 nova_compute[253826]: + sudo -E kolla_set_configs
Jan 26 10:01:11 compute-0 systemd[1]: Started nova_compute container.
Jan 26 10:01:12 compute-0 sudo[253769]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Validating config file
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Copying service configuration files
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
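
The numeric prefixes on the files copied above matter: oslo.config parses the files in a --config-dir in sorted filename order, so a value in 25-nova-extra.conf overrides the same option set in 01-nova.conf, and nova-blank.conf provides an empty base /etc/nova/nova.conf. A minimal, self-contained sketch of that precedence rule (the 'debug' option and temp paths are illustrative, not taken from this deployment):

    # Minimal sketch of oslo.config directory layering; only the
    # lexical-order rule is the point: files in a --config-dir are
    # parsed in sorted filename order, so 25-nova-extra.conf wins
    # over 01-nova.conf for any option both define.
    import os
    import tempfile
    from oslo_config import cfg

    d = tempfile.mkdtemp()
    for name, val in (('01-nova.conf', 'True'), ('25-nova-extra.conf', 'False')):
        with open(os.path.join(d, name), 'w') as f:
            f.write('[DEFAULT]\ndebug = %s\n' % val)

    conf = cfg.ConfigOpts()
    conf.register_opts([cfg.BoolOpt('debug', default=False)])
    conf(args=['--config-dir', d], project='nova')
    print(conf.debug)  # False: the higher-numbered file was parsed last
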
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Deleting /etc/ceph
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Creating directory /etc/ceph
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /etc/ceph
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Writing out command to execute
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 10:01:12 compute-0 nova_compute[253826]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 10:01:12 compute-0 nova_compute[253826]: ++ cat /run_command
Jan 26 10:01:12 compute-0 nova_compute[253826]: + CMD=nova-compute
Jan 26 10:01:12 compute-0 nova_compute[253826]: + ARGS=
Jan 26 10:01:12 compute-0 nova_compute[253826]: + sudo kolla_copy_cacerts
Jan 26 10:01:12 compute-0 nova_compute[253826]: + [[ ! -n '' ]]
Jan 26 10:01:12 compute-0 nova_compute[253826]: + . kolla_extend_start
Jan 26 10:01:12 compute-0 nova_compute[253826]: Running command: 'nova-compute'
Jan 26 10:01:12 compute-0 nova_compute[253826]: + echo 'Running command: '\''nova-compute'\'''
Jan 26 10:01:12 compute-0 nova_compute[253826]: + umask 0022
Jan 26 10:01:12 compute-0 nova_compute[253826]: + exec nova-compute
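
The whole kolla_set_configs pass above is driven by the config.json loaded at 10:01:12: each copy, each "Setting permission" pass, and the /run_command file that kolla_start later cats into CMD. A sketch of what such a file could look like for this container is below; only the source/dest pairs and the command are taken from the log, while the owners, modes, and the recurse flag are illustrative assumptions:

    {
        "command": "nova-compute",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/01-nova.conf",
                "dest": "/etc/nova/nova.conf.d/01-nova.conf",
                "owner": "nova",
                "perm": "0600"
            },
            {
                "source": "/var/lib/kolla/config_files/run-on-host",
                "dest": "/usr/sbin/iscsiadm",
                "owner": "root",
                "perm": "0755"
            }
        ],
        "permissions": [
            {
                "path": "/var/lib/nova/.ssh",
                "owner": "nova:nova",
                "recurse": true
            }
        ]
    }
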
Jan 26 10:01:12 compute-0 ceph-mon[74456]: pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:01:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:12 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:01:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:13 compute-0 podman[253867]: 2026-01-26 10:01:13.169980595 +0000 UTC m=+0.099753068 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
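
The health_status=healthy event above is podman's healthcheck timer executing the test configured in config_data (here the /openstack/healthcheck script mounted read-only into the container). The same check can be triggered on demand; a sketch, assuming podman is on PATH on the host:

    # Sketch: run the configured healthcheck once and report the result.
    import subprocess

    res = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_controller'],
        capture_output=True, text=True)
    # podman exits 0 when the container's healthcheck command succeeds.
    print('healthy' if res.returncode == 0 else res.stdout or res.stderr)
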
Jan 26 10:01:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:13 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:13 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 26 10:01:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:13.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 26 10:01:13 compute-0 python3.9[254020]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 10:01:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 26 10:01:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:13.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
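
The anonymous "HEAD / HTTP/1.0" requests logged by the beast frontend are load-balancer style health probes arriving from the controller addresses. They can be reproduced with any HTTP client; a sketch, assuming radosgw listens on port 8080 (the port is not visible in this log, so treat it as a placeholder):

    # Sketch: issue the same anonymous probe the load balancer sends.
    # Host and port are assumptions; only the HEAD / pattern comes
    # from the radosgw access log above.
    import http.client

    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 in the entries above
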
Jan 26 10:01:14 compute-0 ceph-mon[74456]: pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:14 compute-0 nova_compute[253826]: 2026-01-26 10:01:14.414 253830 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 10:01:14 compute-0 nova_compute[253826]: 2026-01-26 10:01:14.415 253830 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 10:01:14 compute-0 nova_compute[253826]: 2026-01-26 10:01:14.415 253830 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 10:01:14 compute-0 nova_compute[253826]: 2026-01-26 10:01:14.415 253830 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
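
os_vif finds the linux_bridge, noop, and ovs plugins through Python entry points, using stevedore for discovery. The enumeration behind the "Loaded VIF plugins" line can be reproduced directly; a sketch, assuming the upstream 'os_vif' entry-point namespace:

    # Sketch: list the VIF plugins visible in the 'os_vif' namespace,
    # the same discovery mechanism behind the log line above.
    from stevedore import extension

    mgr = extension.ExtensionManager(namespace='os_vif', invoke_on_load=False)
    for ext in sorted(mgr, key=lambda e: e.name):
        print(ext.name, '->', ext.plugin)
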
Jan 26 10:01:14 compute-0 nova_compute[253826]: 2026-01-26 10:01:14.593 253830 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:01:14 compute-0 nova_compute[253826]: 2026-01-26 10:01:14.623 253830 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:01:14 compute-0 nova_compute[253826]: 2026-01-26 10:01:14.624 253830 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
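
The grep returning 1 above is nova (via its storage connector library) probing the iscsiadm binary for manual-scan support by looking for the node.session.scan marker string. Because kolla_set_configs replaced /usr/sbin/iscsiadm with the run-on-host shim earlier in this log, the marker is absent and the feature is treated as unsupported; this is expected here, not a failure. A sketch of the probe using the same oslo helper that appears in the traceback paths:

    # Sketch of the feature probe logged above: grep the binary for the
    # node.session.scan marker; exit code 1 means "not supported".
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'grep', '-F', 'node.session.scan', '/sbin/iscsiadm',
        check_exit_code=[0, 1])
    print('manual scan supported' if out else 'manual scan not supported')
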
Jan 26 10:01:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:14 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:14 compute-0 python3.9[254175]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 10:01:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.153 253830 INFO nova.virt.driver [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.303 253830 INFO nova.compute.provider_config [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.366 253830 DEBUG oslo_concurrency.lockutils [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.367 253830 DEBUG oslo_concurrency.lockutils [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.367 253830 DEBUG oslo_concurrency.lockutils [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.367 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.368 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.368 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.368 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.368 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.368 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.368 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.368 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.369 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.369 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.369 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.369 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.369 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.369 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.369 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.370 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.370 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.370 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.370 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.370 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.370 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.371 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.371 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.371 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.371 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.371 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.371 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.372 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.372 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.372 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.372 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.372 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.372 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.372 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.373 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.373 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.373 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.373 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.373 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.373 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.374 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.374 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.374 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.374 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.374 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.374 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.375 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.375 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.375 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.375 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.375 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.376 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.376 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.376 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.376 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.376 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.376 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.377 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.377 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:15 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.377 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.377 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.377 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.378 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.378 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.378 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.378 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.378 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.378 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.379 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.379 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.379 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.379 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.379 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.380 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.380 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.380 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.380 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.380 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.381 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.381 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.381 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.381 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.381 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.382 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.382 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.382 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.382 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.382 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.383 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.383 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.383 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.383 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.383 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.384 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.384 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.384 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.384 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.384 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.384 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.385 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.385 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.385 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.385 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.385 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.386 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.386 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.386 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.386 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.386 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.387 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.387 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.387 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.387 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.388 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.388 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.388 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.388 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.388 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.389 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.389 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.389 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.389 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.389 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.390 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.390 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.390 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.390 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.390 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.391 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.391 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.391 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.391 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.391 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.392 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.392 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.392 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.392 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.392 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.393 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.393 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.393 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.393 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.394 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.394 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.394 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.394 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.394 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.395 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.395 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.395 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.396 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.396 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.396 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.396 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.396 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.397 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.397 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.397 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.397 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.398 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.398 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.398 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.399 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.399 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.399 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.399 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.399 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.400 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.400 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.400 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.400 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.400 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.401 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.401 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.401 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.401 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.402 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.402 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.402 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.402 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.403 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.403 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.403 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.404 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.404 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.404 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.404 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.404 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.405 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.405 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.405 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.406 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.406 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.406 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.406 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.406 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.407 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.407 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.407 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.407 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.407 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.408 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.408 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.408 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.408 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.408 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.409 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.409 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.409 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.409 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.410 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.410 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.410 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.410 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.410 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.410 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.411 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.411 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.411 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.411 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.411 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.412 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.412 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.412 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.412 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.412 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.413 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.413 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.413 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.413 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.413 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.414 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.414 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.414 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.414 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.414 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.415 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.415 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.415 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.415 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.415 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.416 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.416 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.416 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.416 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.416 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.417 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.417 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.417 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.417 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.417 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.418 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.418 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.418 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.418 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.418 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.419 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.419 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.419 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.419 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.419 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.420 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.420 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.420 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.420 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.420 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.421 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.421 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.421 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.421 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.421 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.421 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.422 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.422 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.422 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.422 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.422 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.423 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.423 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.423 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.423 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.423 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.424 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.424 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.424 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.424 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.425 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.425 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.425 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.425 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.425 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.426 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.426 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.426 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.426 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.427 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.427 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.427 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.427 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.427 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.428 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.428 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.428 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.428 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.429 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.429 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.429 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.429 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.429 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.430 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.430 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.430 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.430 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.431 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.431 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.431 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.431 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.431 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.432 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.432 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.432 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.432 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.432 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.433 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.433 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.433 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.433 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.434 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.434 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.434 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.434 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.434 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.435 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.435 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.435 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.435 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.436 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.436 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.436 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.436 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.436 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.437 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.437 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.437 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.437 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.437 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.438 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.438 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.438 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.438 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.438 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.439 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.439 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.439 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.439 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.440 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.440 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.440 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.440 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.440 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.440 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.441 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.441 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.441 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.441 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.441 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.442 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.442 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.442 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.442 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.442 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.443 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.443 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.443 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.443 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.443 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.444 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.444 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.444 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.444 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.444 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.445 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.445 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.445 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.445 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.445 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.446 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.446 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.446 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.446 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.447 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.447 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.447 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.447 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.447 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.448 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.448 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.448 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.448 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.448 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.449 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.449 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.449 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.449 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.449 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.450 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.450 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.450 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.450 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.450 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.451 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.451 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.451 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.451 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.451 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.452 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.452 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.452 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.452 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.452 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.453 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.453 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.453 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.453 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.453 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.454 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.454 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.454 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.454 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.454 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.455 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.455 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.455 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.455 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.455 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.456 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.456 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.456 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.456 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.457 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.457 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.457 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.457 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.457 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.457 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.458 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.458 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.458 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.458 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.458 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.458 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.459 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.459 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.459 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.459 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.460 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.460 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.460 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.460 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.460 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.460 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.461 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.461 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.461 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.461 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.461 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.462 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.462 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:15 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.462 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.462 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.462 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.463 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.463 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.463 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.463 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.463 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.464 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.464 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.464 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.464 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.464 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.465 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.465 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.465 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.465 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.465 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.466 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.466 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.466 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.466 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.466 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.467 253830 WARNING oslo_config.cfg [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 26 10:01:15 compute-0 nova_compute[253826]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 26 10:01:15 compute-0 nova_compute[253826]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 26 10:01:15 compute-0 nova_compute[253826]: and ``live_migration_inbound_addr`` respectively.
Jan 26 10:01:15 compute-0 nova_compute[253826]: ).  Its value may be silently ignored in the future.
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.467 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.467 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.468 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.468 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.468 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.468 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.469 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.469 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.469 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.469 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.469 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.470 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.470 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.470 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.470 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.470 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.471 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.471 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.471 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.rbd_secret_uuid        = 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.471 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.471 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.472 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.472 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.472 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.472 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.472 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.473 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.473 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.473 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.473 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.474 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.474 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.474 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.474 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.475 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.475 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.475 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.475 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.475 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.476 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.476 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.476 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.476 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.476 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.477 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.477 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.477 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.477 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.477 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.478 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.478 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.478 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.478 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.479 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.479 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.479 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.479 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.479 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.479 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.480 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.480 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.480 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.480 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.480 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.480 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.481 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.481 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.481 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.481 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.481 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.482 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.482 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.482 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.482 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.483 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.483 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.483 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.483 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.483 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.484 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.484 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.484 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.484 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.484 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.485 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.485 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.485 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.485 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.485 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.486 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.486 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.486 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.486 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.487 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.487 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.487 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.487 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.487 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.488 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.488 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.488 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.488 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.488 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.489 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.489 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.489 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.489 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.489 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.490 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.490 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.490 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.490 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.490 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.490 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.491 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.491 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.491 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.491 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.491 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.491 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 26 10:01:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:15.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.491 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.492 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.492 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.492 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.492 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.493 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.493 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.493 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.493 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.493 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.493 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.493 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.494 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.494 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.494 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.494 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.494 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.494 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.494 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.495 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.495 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.495 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.495 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.495 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.495 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.496 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.496 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.496 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.496 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.496 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.496 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.496 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.496 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.497 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.497 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.497 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.497 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.497 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.497 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.497 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.498 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.498 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.498 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.498 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.498 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.498 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.498 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.499 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.499 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.499 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.499 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.499 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.499 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.500 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.500 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.500 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.500 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.500 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.500 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.500 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.501 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.501 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.501 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.501 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.501 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.501 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.501 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.502 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.502 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.502 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.502 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.502 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.502 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.502 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.503 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.503 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.503 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.503 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.503 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.503 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.504 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.504 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.504 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.504 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.504 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.504 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.504 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.505 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.505 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.505 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.505 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.505 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.505 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.506 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.506 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.506 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.506 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.506 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.506 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.506 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.507 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.507 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.507 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.507 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.507 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.507 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.507 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.507 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.508 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.508 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.508 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.508 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.508 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.508 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.508 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.508 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.509 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.509 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.509 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.509 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.509 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.509 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.509 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.510 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.510 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.510 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.510 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.510 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.510 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.511 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.511 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.511 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.511 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.511 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.511 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.511 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.512 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.512 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.512 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.512 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.512 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.512 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.512 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.513 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.513 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.513 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.513 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.513 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.513 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.513 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.513 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.514 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.514 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.514 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.514 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.514 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.514 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.514 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.515 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.515 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.515 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.515 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.515 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.515 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.515 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.516 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.516 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.516 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.516 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.516 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.516 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.517 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.517 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.517 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.517 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.517 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.517 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.517 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.518 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.518 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.518 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.518 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.518 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.518 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.519 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.519 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.519 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.519 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.519 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.519 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.519 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.520 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.520 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.520 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.520 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.520 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.520 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.520 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.520 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.521 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.521 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.521 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.521 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.521 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.521 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.521 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.522 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.522 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.522 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.522 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.522 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.522 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.523 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.523 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.523 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.523 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.523 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.523 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.524 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.524 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.524 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.524 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.524 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.524 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.524 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.525 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.525 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.525 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.525 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.525 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.525 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.525 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.526 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.526 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.526 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.526 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.526 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.526 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.526 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.527 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.527 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.527 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.527 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.527 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.527 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.527 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.528 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.528 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.528 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.528 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.528 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.528 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.528 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.528 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.529 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.529 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.529 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.529 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.529 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.529 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.529 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.530 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.530 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.530 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.530 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.530 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.530 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.530 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.531 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.531 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.531 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.531 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.531 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.531 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.531 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.532 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.532 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.532 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.532 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.532 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.532 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.532 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.533 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.533 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.533 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.533 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.533 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.533 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.533 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.534 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.534 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.534 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.534 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.534 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.534 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.534 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.535 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.535 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.535 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.535 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.535 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.535 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.535 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.535 253830 DEBUG oslo_service.service [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.537 253830 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.547 253830 DEBUG nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.548 253830 DEBUG nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.548 253830 DEBUG nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.548 253830 DEBUG nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 26 10:01:15 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 26 10:01:15 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.621 253830 DEBUG nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f25dbeb25b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.625 253830 DEBUG nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f25dbeb25b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.626 253830 INFO nova.virt.libvirt.driver [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Connection event '1' reason 'None'
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.637 253830 WARNING nova.virt.libvirt.driver [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 26 10:01:15 compute-0 nova_compute[253826]: 2026-01-26 10:01:15.638 253830 DEBUG nova.virt.libvirt.volume.mount [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 26 10:01:15 compute-0 python3.9[254347]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 10:01:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:15.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100116 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:01:16 compute-0 ceph-mon[74456]: pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.601 253830 INFO nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Libvirt host capabilities <capabilities>
Jan 26 10:01:16 compute-0 nova_compute[253826]: 
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <host>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <uuid>e1437fe8-638e-4e57-ae56-ce26d7011781</uuid>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <cpu>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <arch>x86_64</arch>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model>EPYC-Rome-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <vendor>AMD</vendor>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <microcode version='16777317'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <signature family='23' model='49' stepping='0'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='x2apic'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='tsc-deadline'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='osxsave'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='hypervisor'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='tsc_adjust'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='spec-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='stibp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='arch-capabilities'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='cmp_legacy'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='topoext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='virt-ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='lbrv'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='tsc-scale'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='vmcb-clean'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='pause-filter'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='pfthreshold'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='svme-addr-chk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='rdctl-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='skip-l1dfl-vmentry'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='mds-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature name='pschange-mc-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <pages unit='KiB' size='4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <pages unit='KiB' size='2048'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <pages unit='KiB' size='1048576'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </cpu>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <power_management>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <suspend_mem/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </power_management>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <iommu support='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <migration_features>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <live/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <uri_transports>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <uri_transport>tcp</uri_transport>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <uri_transport>rdma</uri_transport>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </uri_transports>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </migration_features>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <topology>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <cells num='1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <cell id='0'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:           <memory unit='KiB'>7864308</memory>
Jan 26 10:01:16 compute-0 nova_compute[253826]:           <pages unit='KiB' size='4'>1966077</pages>
Jan 26 10:01:16 compute-0 nova_compute[253826]:           <pages unit='KiB' size='2048'>0</pages>
Jan 26 10:01:16 compute-0 nova_compute[253826]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 26 10:01:16 compute-0 nova_compute[253826]:           <distances>
Jan 26 10:01:16 compute-0 nova_compute[253826]:             <sibling id='0' value='10'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:           </distances>
Jan 26 10:01:16 compute-0 nova_compute[253826]:           <cpus num='8'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:           </cpus>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         </cell>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </cells>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </topology>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <cache>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </cache>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <secmodel>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model>selinux</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <doi>0</doi>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </secmodel>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <secmodel>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model>dac</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <doi>0</doi>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </secmodel>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </host>
Jan 26 10:01:16 compute-0 nova_compute[253826]: 
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <guest>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <os_type>hvm</os_type>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <arch name='i686'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <wordsize>32</wordsize>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <domain type='qemu'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <domain type='kvm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </arch>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <features>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <pae/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <nonpae/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <acpi default='on' toggle='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <apic default='on' toggle='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <cpuselection/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <deviceboot/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <disksnapshot default='on' toggle='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <externalSnapshot/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </features>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </guest>
Jan 26 10:01:16 compute-0 nova_compute[253826]: 
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <guest>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <os_type>hvm</os_type>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <arch name='x86_64'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <wordsize>64</wordsize>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <domain type='qemu'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <domain type='kvm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </arch>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <features>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <acpi default='on' toggle='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <apic default='on' toggle='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <cpuselection/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <deviceboot/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <disksnapshot default='on' toggle='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <externalSnapshot/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </features>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </guest>
Jan 26 10:01:16 compute-0 nova_compute[253826]: 
Jan 26 10:01:16 compute-0 nova_compute[253826]: </capabilities>
Jan 26 10:01:16 compute-0 nova_compute[253826]: 
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.607 253830 DEBUG nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.626 253830 DEBUG nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 26 10:01:16 compute-0 nova_compute[253826]: <domainCapabilities>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <domain>kvm</domain>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <arch>i686</arch>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <vcpu max='240'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <iothreads supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <os supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <enum name='firmware'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <loader supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>rom</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pflash</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='readonly'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>yes</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>no</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='secure'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>no</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </loader>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </os>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <cpu>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='host-passthrough' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='hostPassthroughMigratable'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>on</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>off</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='maximum' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='maximumMigratable'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>on</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>off</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='host-model' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <vendor>AMD</vendor>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='x2apic'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='hypervisor'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='stibp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='overflow-recov'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='succor'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='lbrv'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='tsc-scale'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='flushbyasid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='pause-filter'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='pfthreshold'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='disable' name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='custom' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='ClearwaterForest'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ddpd-u'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sha512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm3'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='ClearwaterForest-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ddpd-u'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sha512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm3'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cooperlake'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cooperlake-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cooperlake-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Dhyana-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Genoa'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='perfmon-v2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Turin'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='perfmon-v2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbpb'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Turin-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='perfmon-v2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbpb'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-128'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-256'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:01:16] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:01:16] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-128'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-256'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:16 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v6'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v7'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='KnightsMill'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512er'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512pf'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='KnightsMill-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512er'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512pf'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G4-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tbm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G5-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tbm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='athlon'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='athlon-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='core2duo'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='core2duo-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='coreduo'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='coreduo-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='n270'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='n270-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='phenom'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='phenom-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </cpu>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <memoryBacking supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <enum name='sourceType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>file</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>anonymous</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>memfd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </memoryBacking>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <devices>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <disk supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='diskDevice'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>disk</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>cdrom</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>floppy</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>lun</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='bus'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>ide</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>fdc</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>scsi</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>usb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>sata</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-non-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </disk>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <graphics supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vnc</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>egl-headless</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>dbus</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </graphics>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <video supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='modelType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vga</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>cirrus</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>none</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>bochs</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>ramfb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </video>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <hostdev supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='mode'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>subsystem</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='startupPolicy'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>default</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>mandatory</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>requisite</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>optional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='subsysType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>usb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pci</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>scsi</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='capsType'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='pciBackend'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </hostdev>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <rng supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-non-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendModel'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>random</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>egd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>builtin</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </rng>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <filesystem supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='driverType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>path</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>handle</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtiofs</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </filesystem>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <tpm supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tpm-tis</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tpm-crb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendModel'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>emulator</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>external</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendVersion'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>2.0</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </tpm>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <redirdev supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='bus'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>usb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </redirdev>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <channel supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pty</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>unix</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </channel>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <crypto supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>qemu</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendModel'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>builtin</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </crypto>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <interface supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>default</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>passt</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </interface>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <panic supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>isa</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>hyperv</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </panic>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <console supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>null</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vc</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pty</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>dev</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>file</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pipe</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>stdio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>udp</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tcp</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>unix</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>qemu-vdagent</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>dbus</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </console>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </devices>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <features>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <gic supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <vmcoreinfo supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <genid supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <backingStoreInput supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <backup supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <async-teardown supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <s390-pv supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <ps2 supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <tdx supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <sev supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <sgx supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <hyperv supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='features'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>relaxed</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vapic</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>spinlocks</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vpindex</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>runtime</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>synic</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>stimer</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>reset</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vendor_id</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>frequencies</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>reenlightenment</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tlbflush</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>ipi</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>avic</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>emsr_bitmap</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>xmm_input</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <defaults>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <spinlocks>4095</spinlocks>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <stimer_direct>on</stimer_direct>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </defaults>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </hyperv>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <launchSecurity supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </features>
Jan 26 10:01:16 compute-0 nova_compute[253826]: </domainCapabilities>
Jan 26 10:01:16 compute-0 nova_compute[253826]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.633 253830 DEBUG nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 26 10:01:16 compute-0 nova_compute[253826]: <domainCapabilities>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <domain>kvm</domain>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <arch>i686</arch>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <vcpu max='4096'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <iothreads supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <os supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <enum name='firmware'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <loader supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>rom</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pflash</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='readonly'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>yes</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>no</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='secure'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>no</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </loader>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </os>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <cpu>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='host-passthrough' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='hostPassthroughMigratable'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>on</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>off</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='maximum' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='maximumMigratable'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>on</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>off</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='host-model' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <vendor>AMD</vendor>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='x2apic'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='hypervisor'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='stibp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='overflow-recov'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='succor'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='lbrv'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='tsc-scale'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='flushbyasid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='pause-filter'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='pfthreshold'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='disable' name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='custom' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='ClearwaterForest'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ddpd-u'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sha512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm3'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='ClearwaterForest-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ddpd-u'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sha512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm3'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cooperlake'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cooperlake-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cooperlake-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Dhyana-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Genoa'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='perfmon-v2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Turin'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='perfmon-v2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbpb'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Turin-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='perfmon-v2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbpb'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-128'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-256'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-128'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-256'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v6'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v7'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='KnightsMill'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512er'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512pf'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='KnightsMill-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512er'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512pf'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G4-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tbm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G5-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tbm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='athlon'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='athlon-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='core2duo'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='core2duo-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='coreduo'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='coreduo-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='n270'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='n270-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='phenom'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='phenom-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </cpu>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <memoryBacking supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <enum name='sourceType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>file</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>anonymous</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>memfd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </memoryBacking>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <devices>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <disk supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='diskDevice'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>disk</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>cdrom</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>floppy</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>lun</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='bus'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>fdc</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>scsi</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>usb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>sata</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-non-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </disk>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <graphics supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vnc</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>egl-headless</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>dbus</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </graphics>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <video supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='modelType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vga</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>cirrus</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>none</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>bochs</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>ramfb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </video>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <hostdev supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='mode'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>subsystem</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='startupPolicy'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>default</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>mandatory</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>requisite</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>optional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='subsysType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>usb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pci</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>scsi</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='capsType'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='pciBackend'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </hostdev>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <rng supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-non-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendModel'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>random</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>egd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>builtin</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </rng>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <filesystem supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='driverType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>path</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>handle</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtiofs</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </filesystem>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <tpm supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tpm-tis</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tpm-crb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendModel'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>emulator</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>external</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendVersion'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>2.0</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </tpm>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <redirdev supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='bus'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>usb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </redirdev>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <channel supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pty</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>unix</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </channel>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <crypto supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>qemu</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendModel'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>builtin</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </crypto>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <interface supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>default</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>passt</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </interface>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <panic supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>isa</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>hyperv</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </panic>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <console supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>null</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vc</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pty</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>dev</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>file</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pipe</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>stdio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>udp</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tcp</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>unix</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>qemu-vdagent</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>dbus</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </console>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </devices>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <features>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <gic supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <vmcoreinfo supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <genid supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <backingStoreInput supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <backup supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <async-teardown supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <s390-pv supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <ps2 supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <tdx supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <sev supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <sgx supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <hyperv supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='features'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>relaxed</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vapic</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>spinlocks</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vpindex</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>runtime</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>synic</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>stimer</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>reset</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vendor_id</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>frequencies</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>reenlightenment</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tlbflush</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>ipi</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>avic</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>emsr_bitmap</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>xmm_input</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <defaults>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <spinlocks>4095</spinlocks>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <stimer_direct>on</stimer_direct>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </defaults>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </hyperv>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <launchSecurity supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </features>
Jan 26 10:01:16 compute-0 nova_compute[253826]: </domainCapabilities>
Jan 26 10:01:16 compute-0 nova_compute[253826]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.688 253830 DEBUG nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.693 253830 DEBUG nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 26 10:01:16 compute-0 nova_compute[253826]: <domainCapabilities>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <domain>kvm</domain>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <arch>x86_64</arch>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <vcpu max='240'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <iothreads supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <os supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <enum name='firmware'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <loader supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>rom</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pflash</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='readonly'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>yes</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>no</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='secure'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>no</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </loader>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </os>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <cpu>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='host-passthrough' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='hostPassthroughMigratable'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>on</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>off</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='maximum' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='maximumMigratable'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>on</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>off</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='host-model' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <vendor>AMD</vendor>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='x2apic'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='hypervisor'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='stibp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='overflow-recov'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='succor'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='lbrv'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='tsc-scale'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='flushbyasid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='pause-filter'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='pfthreshold'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='disable' name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='custom' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='ClearwaterForest'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ddpd-u'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sha512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm3'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='ClearwaterForest-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ddpd-u'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sha512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm3'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cooperlake'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cooperlake-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cooperlake-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Dhyana-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Genoa'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='perfmon-v2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Turin'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='perfmon-v2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbpb'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Turin-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='perfmon-v2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbpb'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-128'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-256'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-128'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-256'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v6'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v7'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='KnightsMill'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512er'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512pf'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='KnightsMill-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512er'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512pf'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G4-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tbm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G5-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tbm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='athlon'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='athlon-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='core2duo'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='core2duo-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='coreduo'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='coreduo-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='n270'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='n270-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='phenom'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='phenom-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </cpu>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <memoryBacking supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <enum name='sourceType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>file</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>anonymous</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>memfd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </memoryBacking>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <devices>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <disk supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='diskDevice'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>disk</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>cdrom</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>floppy</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>lun</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='bus'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>ide</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>fdc</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>scsi</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>usb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>sata</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-non-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </disk>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <graphics supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vnc</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>egl-headless</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>dbus</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </graphics>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <video supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='modelType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vga</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>cirrus</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>none</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>bochs</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>ramfb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </video>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <hostdev supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='mode'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>subsystem</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='startupPolicy'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>default</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>mandatory</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>requisite</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>optional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='subsysType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>usb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pci</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>scsi</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='capsType'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='pciBackend'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </hostdev>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <rng supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-non-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendModel'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>random</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>egd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>builtin</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </rng>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <filesystem supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='driverType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>path</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>handle</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtiofs</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </filesystem>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <tpm supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tpm-tis</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tpm-crb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendModel'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>emulator</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>external</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendVersion'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>2.0</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </tpm>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <redirdev supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='bus'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>usb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </redirdev>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <channel supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pty</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>unix</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </channel>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <crypto supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>qemu</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendModel'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>builtin</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </crypto>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <interface supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>default</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>passt</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </interface>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <panic supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>isa</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>hyperv</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </panic>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <console supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>null</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vc</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pty</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>dev</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>file</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pipe</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>stdio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>udp</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tcp</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>unix</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>qemu-vdagent</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>dbus</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </console>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </devices>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <features>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <gic supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <vmcoreinfo supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <genid supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <backingStoreInput supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <backup supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <async-teardown supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <s390-pv supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <ps2 supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <tdx supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <sev supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <sgx supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <hyperv supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='features'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>relaxed</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vapic</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>spinlocks</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vpindex</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>runtime</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>synic</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>stimer</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>reset</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vendor_id</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>frequencies</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>reenlightenment</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tlbflush</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>ipi</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>avic</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>emsr_bitmap</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>xmm_input</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <defaults>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <spinlocks>4095</spinlocks>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <stimer_direct>on</stimer_direct>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </defaults>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </hyperv>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <launchSecurity supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </features>
Jan 26 10:01:16 compute-0 nova_compute[253826]: </domainCapabilities>
Jan 26 10:01:16 compute-0 nova_compute[253826]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.774 253830 DEBUG nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 26 10:01:16 compute-0 nova_compute[253826]: <domainCapabilities>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <domain>kvm</domain>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <arch>x86_64</arch>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <vcpu max='4096'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <iothreads supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <os supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <enum name='firmware'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>efi</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <loader supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>rom</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pflash</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='readonly'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>yes</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>no</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='secure'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>yes</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>no</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </loader>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </os>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <cpu>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='host-passthrough' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='hostPassthroughMigratable'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>on</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>off</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='maximum' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='maximumMigratable'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>on</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>off</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='host-model' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <vendor>AMD</vendor>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='x2apic'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='hypervisor'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='stibp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='overflow-recov'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='succor'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='lbrv'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='tsc-scale'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='flushbyasid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='pause-filter'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='pfthreshold'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <feature policy='disable' name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <mode name='custom' supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Broadwell-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='ClearwaterForest'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ddpd-u'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sha512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm3'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='ClearwaterForest-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ddpd-u'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sha512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm3'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sm4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cooperlake'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cooperlake-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Cooperlake-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Denverton-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Dhyana-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Genoa'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='perfmon-v2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Milan-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Rome-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Turin'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='perfmon-v2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbpb'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-Turin-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amd-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='auto-ibrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='perfmon-v2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbpb'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='stibp-always-on'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='EPYC-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-128'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-256'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='GraniteRapids-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-128'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-256'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx10-512'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='prefetchiti'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Haswell-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v6'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Icelake-Server-v7'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='IvyBridge-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='KnightsMill'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512er'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512pf'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='KnightsMill-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512er'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512pf'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G4-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tbm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Opteron_G5-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fma4'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tbm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xop'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SapphireRapids-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='amx-tile'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-bf16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-fp16'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bitalg'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrc'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fzrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='la57'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='taa-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='SierraForest-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ifma'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cmpccxadd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fbsdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='fsrs'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ibrs-all'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='intel-psfd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='lam'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mcdt-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pbrsb-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='psdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='serialize'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vaes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Client-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='hle'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='rtm'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Skylake-Server-v5'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512bw'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512cd'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512dq'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512f'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='avx512vl'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='invpcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pcid'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='pku'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='mpx'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v2'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v3'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='core-capability'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='split-lock-detect'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='Snowridge-v4'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='cldemote'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='erms'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='gfni'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdir64b'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='movdiri'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='xsaves'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='athlon'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='athlon-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='core2duo'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='core2duo-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='coreduo'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='coreduo-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='n270'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='n270-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='ss'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='phenom'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <blockers model='phenom-v1'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnow'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <feature name='3dnowext'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </blockers>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </mode>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </cpu>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <memoryBacking supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <enum name='sourceType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>file</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>anonymous</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <value>memfd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </memoryBacking>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <devices>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <disk supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='diskDevice'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>disk</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>cdrom</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>floppy</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>lun</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='bus'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>fdc</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>scsi</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>usb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>sata</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-non-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </disk>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <graphics supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vnc</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>egl-headless</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>dbus</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </graphics>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <video supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='modelType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vga</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>cirrus</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>none</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>bochs</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>ramfb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </video>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <hostdev supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='mode'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>subsystem</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='startupPolicy'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>default</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>mandatory</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>requisite</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>optional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='subsysType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>usb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pci</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>scsi</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='capsType'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='pciBackend'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </hostdev>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <rng supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtio-non-transitional</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendModel'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>random</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>egd</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>builtin</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </rng>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <filesystem supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='driverType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>path</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>handle</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>virtiofs</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </filesystem>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <tpm supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tpm-tis</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tpm-crb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendModel'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>emulator</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>external</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendVersion'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>2.0</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </tpm>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <redirdev supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='bus'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>usb</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </redirdev>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <channel supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pty</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>unix</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </channel>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <crypto supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>qemu</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendModel'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>builtin</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </crypto>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <interface supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='backendType'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>default</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>passt</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </interface>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <panic supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='model'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>isa</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>hyperv</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </panic>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <console supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='type'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>null</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vc</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pty</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>dev</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>file</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>pipe</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>stdio</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>udp</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tcp</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>unix</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>qemu-vdagent</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>dbus</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </console>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </devices>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   <features>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <gic supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <vmcoreinfo supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <genid supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <backingStoreInput supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <backup supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <async-teardown supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <s390-pv supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <ps2 supported='yes'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <tdx supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <sev supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <sgx supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <hyperv supported='yes'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <enum name='features'>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>relaxed</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vapic</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>spinlocks</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vpindex</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>runtime</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>synic</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>stimer</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>reset</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>vendor_id</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>frequencies</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>reenlightenment</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>tlbflush</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>ipi</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>avic</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>emsr_bitmap</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <value>xmm_input</value>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </enum>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       <defaults>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <spinlocks>4095</spinlocks>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <stimer_direct>on</stimer_direct>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 10:01:16 compute-0 nova_compute[253826]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 10:01:16 compute-0 nova_compute[253826]:       </defaults>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     </hyperv>
Jan 26 10:01:16 compute-0 nova_compute[253826]:     <launchSecurity supported='no'/>
Jan 26 10:01:16 compute-0 nova_compute[253826]:   </features>
Jan 26 10:01:16 compute-0 nova_compute[253826]: </domainCapabilities>
Jan 26 10:01:16 compute-0 nova_compute[253826]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.841 253830 DEBUG nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.847 253830 INFO nova.virt.libvirt.host [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Secure Boot support detected
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.849 253830 INFO nova.virt.libvirt.driver [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.859 253830 DEBUG nova.virt.libvirt.driver [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.896 253830 INFO nova.virt.node [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Determined node identity 0dd9ba26-1c92-4319-953d-4e0ed59143cf from /var/lib/nova/compute_id
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.932 253830 WARNING nova.compute.manager [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Compute nodes ['0dd9ba26-1c92-4319-953d-4e0ed59143cf'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 26 10:01:16 compute-0 nova_compute[253826]: 2026-01-26 10:01:16.994 253830 INFO nova.compute.manager [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 26 10:01:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.047 253830 WARNING nova.compute.manager [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.047 253830 DEBUG oslo_concurrency.lockutils [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.047 253830 DEBUG oslo_concurrency.lockutils [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.048 253830 DEBUG oslo_concurrency.lockutils [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.048 253830 DEBUG nova.compute.resource_tracker [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.048 253830 DEBUG oslo_concurrency.processutils [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:01:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:01:17.073Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:01:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:01:17.073Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:01:17 compute-0 sudo[254543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heqdxikbntwziwofsaxaycyvcmnqmndu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421676.5368166-3566-46247222697279/AnsiballZ_podman_container.py'
Jan 26 10:01:17 compute-0 sudo[254543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:01:17 compute-0 python3.9[254545]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 26 10:01:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:17 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:17 compute-0 sudo[254543]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:17 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 26 10:01:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:17.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 26 10:01:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:01:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/369882354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.530 253830 DEBUG oslo_concurrency.processutils [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:01:17 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 26 10:01:17 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 26 10:01:17 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/369882354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:01:17 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 10:01:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:17.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.827 253830 WARNING nova.virt.libvirt.driver [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.829 253830 DEBUG nova.compute.resource_tracker [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4952MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.829 253830 DEBUG oslo_concurrency.lockutils [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.829 253830 DEBUG oslo_concurrency.lockutils [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.843 253830 WARNING nova.compute.resource_tracker [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] No compute node record for compute-0.ctlplane.example.com:0dd9ba26-1c92-4319-953d-4e0ed59143cf: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 0dd9ba26-1c92-4319-953d-4e0ed59143cf could not be found.
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.865 253830 INFO nova.compute.resource_tracker [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 0dd9ba26-1c92-4319-953d-4e0ed59143cf
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.948 253830 DEBUG nova.compute.resource_tracker [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:01:17 compute-0 nova_compute[253826]: 2026-01-26 10:01:17.948 253830 DEBUG nova.compute.resource_tracker [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:01:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:01:18 compute-0 sudo[254763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hndcwmfvrolnrfhzcidzifpnpflojgql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421677.795208-3590-3918574860972/AnsiballZ_systemd.py'
Jan 26 10:01:18 compute-0 sudo[254763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:01:18 compute-0 python3.9[254765]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 10:01:18 compute-0 nova_compute[253826]: 2026-01-26 10:01:18.499 253830 INFO nova.scheduler.client.report [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] [req-cda0c706-03ed-4d8b-9661-625d731811a1] Created resource provider record via placement API for resource provider with UUID 0dd9ba26-1c92-4319-953d-4e0ed59143cf and name compute-0.ctlplane.example.com.
Jan 26 10:01:18 compute-0 systemd[1]: Stopping nova_compute container...
Jan 26 10:01:18 compute-0 nova_compute[253826]: 2026-01-26 10:01:18.532 253830 DEBUG oslo_concurrency.processutils [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:01:18
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.nfs', '.rgw.root', 'default.rgw.log', 'backups']
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:01:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:18 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:01:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:01:18 compute-0 nova_compute[253826]: 2026-01-26 10:01:18.764 253830 DEBUG oslo_concurrency.lockutils [None req-dc863b2f-de9a-44af-a8d1-0085df73c298 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:01:18 compute-0 nova_compute[253826]: 2026-01-26 10:01:18.765 253830 DEBUG oslo_concurrency.lockutils [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:01:18 compute-0 nova_compute[253826]: 2026-01-26 10:01:18.765 253830 DEBUG oslo_concurrency.lockutils [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:01:18 compute-0 nova_compute[253826]: 2026-01-26 10:01:18.766 253830 DEBUG oslo_concurrency.lockutils [None req-cbcd2c4e-f015-4504-8ded-9499031b4348 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:01:18 compute-0 ceph-mon[74456]: pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:18 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1791287281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:01:18 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/686553642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:01:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:01:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:19 compute-0 systemd[1]: libpod-87d6f17db4c9589cc50c03d7f1672222dfc8b57b725dd2c4afea0e95ae3cc771.scope: Deactivated successfully.
Jan 26 10:01:19 compute-0 virtqemud[254348]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 26 10:01:19 compute-0 systemd[1]: libpod-87d6f17db4c9589cc50c03d7f1672222dfc8b57b725dd2c4afea0e95ae3cc771.scope: Consumed 4.234s CPU time.
Jan 26 10:01:19 compute-0 virtqemud[254348]: hostname: compute-0
Jan 26 10:01:19 compute-0 virtqemud[254348]: End of file while reading data: Input/output error
Jan 26 10:01:19 compute-0 podman[254771]: 2026-01-26 10:01:19.300982956 +0000 UTC m=+0.785884411 container died 87d6f17db4c9589cc50c03d7f1672222dfc8b57b725dd2c4afea0e95ae3cc771 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true)
Jan 26 10:01:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87d6f17db4c9589cc50c03d7f1672222dfc8b57b725dd2c4afea0e95ae3cc771-userdata-shm.mount: Deactivated successfully.
Jan 26 10:01:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-95373d6508b1875f889c001254e666746adffba07de0841494aa9fb4bcd9742f-merged.mount: Deactivated successfully.
Jan 26 10:01:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:19 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b24001410 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:19 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:19.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 26 10:01:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:19.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 26 10:01:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:20 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:20 compute-0 sudo[254823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:01:20 compute-0 sudo[254823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:01:20 compute-0 sudo[254823]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:01:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:21 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:21 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b24001f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000032s ======
Jan 26 10:01:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:21.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 26 10:01:21 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:01:21 compute-0 ceph-mon[74456]: pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:01:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:21.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:22 compute-0 podman[254771]: 2026-01-26 10:01:22.20008304 +0000 UTC m=+3.684984485 container cleanup 87d6f17db4c9589cc50c03d7f1672222dfc8b57b725dd2c4afea0e95ae3cc771 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 10:01:22 compute-0 podman[254771]: nova_compute
Jan 26 10:01:22 compute-0 podman[254849]: nova_compute
Jan 26 10:01:22 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 26 10:01:22 compute-0 systemd[1]: Stopped nova_compute container.
Jan 26 10:01:22 compute-0 systemd[1]: Starting nova_compute container...
Jan 26 10:01:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95373d6508b1875f889c001254e666746adffba07de0841494aa9fb4bcd9742f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95373d6508b1875f889c001254e666746adffba07de0841494aa9fb4bcd9742f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95373d6508b1875f889c001254e666746adffba07de0841494aa9fb4bcd9742f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95373d6508b1875f889c001254e666746adffba07de0841494aa9fb4bcd9742f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95373d6508b1875f889c001254e666746adffba07de0841494aa9fb4bcd9742f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:22 compute-0 podman[254862]: 2026-01-26 10:01:22.546377613 +0000 UTC m=+0.253419014 container init 87d6f17db4c9589cc50c03d7f1672222dfc8b57b725dd2c4afea0e95ae3cc771 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 10:01:22 compute-0 podman[254862]: 2026-01-26 10:01:22.555627041 +0000 UTC m=+0.262668422 container start 87d6f17db4c9589cc50c03d7f1672222dfc8b57b725dd2c4afea0e95ae3cc771 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Jan 26 10:01:22 compute-0 nova_compute[254880]: + sudo -E kolla_set_configs
Jan 26 10:01:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:22 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Validating config file
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Copying service configuration files
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Deleting /etc/ceph
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Creating directory /etc/ceph
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /etc/ceph
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Writing out command to execute
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 10:01:22 compute-0 nova_compute[254880]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 10:01:22 compute-0 nova_compute[254880]: ++ cat /run_command
Jan 26 10:01:22 compute-0 nova_compute[254880]: + CMD=nova-compute
Jan 26 10:01:22 compute-0 nova_compute[254880]: + ARGS=
Jan 26 10:01:22 compute-0 nova_compute[254880]: + sudo kolla_copy_cacerts
Jan 26 10:01:22 compute-0 nova_compute[254880]: + [[ ! -n '' ]]
Jan 26 10:01:22 compute-0 nova_compute[254880]: + . kolla_extend_start
Jan 26 10:01:22 compute-0 nova_compute[254880]: Running command: 'nova-compute'
Jan 26 10:01:22 compute-0 nova_compute[254880]: + echo 'Running command: '\''nova-compute'\'''
Jan 26 10:01:22 compute-0 nova_compute[254880]: + umask 0022
Jan 26 10:01:22 compute-0 nova_compute[254880]: + exec nova-compute
Jan 26 10:01:22 compute-0 podman[254862]: nova_compute
Jan 26 10:01:22 compute-0 systemd[1]: Started nova_compute container.
Jan 26 10:01:22 compute-0 ceph-mon[74456]: pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:01:22 compute-0 sudo[254763]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:01:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:01:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:23 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:23 compute-0 sudo[255041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evmohzvudlzmuirtincvomqekmqbnlps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769421683.1248996-3617-227587290826624/AnsiballZ_podman_container.py'
Jan 26 10:01:23 compute-0 sudo[255041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:01:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:23 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:23.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:23 compute-0 python3.9[255043]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 26 10:01:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:23.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:23 compute-0 systemd[1]: Started libpod-conmon-b2f05eda4dc9988e3e2cd6a10f9a5dd30ad8fcb7ae5bccd1eb356560b56082b4.scope.
Jan 26 10:01:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:01:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ff54db258c1ec2ef7bcde73a39fc56b89b780998a5a7578e9749cdc3e5d7a3/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ff54db258c1ec2ef7bcde73a39fc56b89b780998a5a7578e9749cdc3e5d7a3/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97ff54db258c1ec2ef7bcde73a39fc56b89b780998a5a7578e9749cdc3e5d7a3/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:23 compute-0 podman[255068]: 2026-01-26 10:01:23.951477337 +0000 UTC m=+0.137244758 container init b2f05eda4dc9988e3e2cd6a10f9a5dd30ad8fcb7ae5bccd1eb356560b56082b4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 10:01:23 compute-0 podman[255068]: 2026-01-26 10:01:23.963371844 +0000 UTC m=+0.149139245 container start b2f05eda4dc9988e3e2cd6a10f9a5dd30ad8fcb7ae5bccd1eb356560b56082b4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 10:01:23 compute-0 python3.9[255043]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
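[editor's note] The config_data recorded by podman above fully describes the one-shot nova_compute_init container. Purely as an illustration (this is not the ansible-containers.podman module source), the logged settings map onto a direct podman invocation roughly like the following sketch; every image, volume, environment, and command value below is copied from the config_data in the log, and the wrapper itself is an assumption.

    # Hypothetical reconstruction of the container start from the logged
    # config_data -- illustrative only, not the edpm_ansible module code.
    import subprocess

    image = ("quay.io/podified-antelope-centos9/openstack-nova-compute"
             "@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b")
    cmd = [
        "podman", "run", "--name", "nova_compute_init",
        "--user", "root", "--net", "none",
        "--security-opt", "label=disable",
        "--env", "NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id",
        "--volume", "/dev/log:/dev/log",
        "--volume", "/var/lib/nova:/var/lib/nova:shared",
        "--volume", "/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z",
        "--volume", "/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z",
        image,
        "bash", "-c",
        "python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init",
    ]
    # detach=False and restart='never' in the log: a foreground one-shot run.
    subprocess.run(cmd, check=True)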
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Applying nova statedir ownership
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 26 10:01:24 compute-0 nova_compute_init[255090]: INFO:nova_statedir:Nova statedir ownership complete
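[editor's note] The INFO lines above trace the whole ownership pass: walk /var/lib/nova, chown anything not already owned by the target 42436:42436, set the container_file_t SELinux context on directories, and skip the path named in NOVA_STATEDIR_OWNERSHIP_SKIP. A minimal sketch of that algorithm follows; it is an assumption-based reconstruction, not the shipped nova_statedir_ownership.py that is bind-mounted from /var/lib/openstack/config/nova.

    # Minimal sketch of the ownership pass logged above; NOT the real script.
    import os
    import subprocess

    TARGET_UID, TARGET_GID = 42436, 42436          # "Target ownership ... 42436:42436"
    STATEDIR = "/var/lib/nova"
    SKIP = os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "")
    SECONTEXT = "system_u:object_r:container_file_t:s0"

    def fix_path(path):
        if path == SKIP:
            return
        st = os.lstat(path)
        if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
            # "Changing ownership of ... from 1000:1000 to 42436:42436"
            os.lchown(path, TARGET_UID, TARGET_GID)
        if os.path.isdir(path):
            # Directories get "Setting selinux context of ..." in the log.
            subprocess.run(["chcon", SECONTEXT, path], check=False)

    for root, dirs, files in os.walk(STATEDIR):
        fix_path(root)
        for name in files:
            fix_path(os.path.join(root, name))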
Jan 26 10:01:24 compute-0 systemd[1]: libpod-b2f05eda4dc9988e3e2cd6a10f9a5dd30ad8fcb7ae5bccd1eb356560b56082b4.scope: Deactivated successfully.
Jan 26 10:01:24 compute-0 podman[255091]: 2026-01-26 10:01:24.032349826 +0000 UTC m=+0.037715538 container died b2f05eda4dc9988e3e2cd6a10f9a5dd30ad8fcb7ae5bccd1eb356560b56082b4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Jan 26 10:01:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2f05eda4dc9988e3e2cd6a10f9a5dd30ad8fcb7ae5bccd1eb356560b56082b4-userdata-shm.mount: Deactivated successfully.
Jan 26 10:01:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-97ff54db258c1ec2ef7bcde73a39fc56b89b780998a5a7578e9749cdc3e5d7a3-merged.mount: Deactivated successfully.
Jan 26 10:01:24 compute-0 podman[255103]: 2026-01-26 10:01:24.087694726 +0000 UTC m=+0.056263115 container cleanup b2f05eda4dc9988e3e2cd6a10f9a5dd30ad8fcb7ae5bccd1eb356560b56082b4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3)
Jan 26 10:01:24 compute-0 systemd[1]: libpod-conmon-b2f05eda4dc9988e3e2cd6a10f9a5dd30ad8fcb7ae5bccd1eb356560b56082b4.scope: Deactivated successfully.
Jan 26 10:01:24 compute-0 sudo[255041]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:24 compute-0 ceph-mon[74456]: pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:01:24 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/504650738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:01:24 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/472112996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
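[editor's note] The two mon audit lines above show the client.openstack keyring on each host dispatching {"prefix": "df", "format": "json"}, i.e. the periodic pool-capacity poll from the OpenStack Ceph drivers. The same query can be reproduced from the CLI; a small sketch (the key names below match the JSON that ceph df emits):

    # Reproduce the capacity query audited by ceph-mon above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "--name", "client.openstack", "df", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    stats = json.loads(out.stdout)
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])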
Jan 26 10:01:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:24 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b24001f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:24 compute-0 nova_compute[254880]: 2026-01-26 10:01:24.758 254884 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 10:01:24 compute-0 nova_compute[254880]: 2026-01-26 10:01:24.758 254884 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 10:01:24 compute-0 nova_compute[254880]: 2026-01-26 10:01:24.759 254884 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 10:01:24 compute-0 nova_compute[254880]: 2026-01-26 10:01:24.759 254884 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 26 10:01:24 compute-0 nova_compute[254880]: 2026-01-26 10:01:24.926 254884 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:01:24 compute-0 nova_compute[254880]: 2026-01-26 10:01:24.948 254884 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:01:24 compute-0 nova_compute[254880]: 2026-01-26 10:01:24.948 254884 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
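[editor's note] The three processutils lines above are a capability probe, not a failure: the volume-attach code greps the iscsiadm binary for the literal string "node.session.scan" to decide whether manual scan mode is supported, and exit status 1 simply means "not present", hence "Not Retrying". A sketch of the probe as logged:

    # Sketch of the capability probe logged above: grep the iscsiadm binary
    # for the literal option string; exit status 0 => manual scans supported.
    import subprocess

    def iscsiadm_supports_manual_scan(path="/sbin/iscsiadm"):
        res = subprocess.run(
            ["grep", "-F", "node.session.scan", path],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
        return res.returncode == 0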
Jan 26 10:01:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:01:25 compute-0 sshd-session[228575]: Connection closed by 192.168.122.30 port 33150
Jan 26 10:01:25 compute-0 sshd-session[228572]: pam_unix(sshd:session): session closed for user zuul
Jan 26 10:01:25 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Jan 26 10:01:25 compute-0 systemd[1]: session-54.scope: Consumed 2min 2.178s CPU time.
Jan 26 10:01:25 compute-0 systemd-logind[787]: Session 54 logged out. Waiting for processes to exit.
Jan 26 10:01:25 compute-0 systemd-logind[787]: Removed session 54.
Jan 26 10:01:25 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/392915151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:01:25 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/729255989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:01:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:25 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:25 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:25.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.528 254884 INFO nova.virt.driver [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.634 254884 INFO nova.compute.provider_config [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.643 254884 DEBUG oslo_concurrency.lockutils [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.643 254884 DEBUG oslo_concurrency.lockutils [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.644 254884 DEBUG oslo_concurrency.lockutils [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
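[editor's note] The Acquiring/Acquired/Releasing trio above is the standard trace emitted by oslo.concurrency's named locks; the equivalent application code is a single context manager, sketched here with the lock name taken from the log:

    # The acquire/release trace above comes from a named oslo.concurrency lock.
    from oslo_concurrency import lockutils

    with lockutils.lock("singleton_lock"):
        # critical section: the service registers its singleton here
        pass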
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.644 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.644 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.644 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.644 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.644 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.645 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.645 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.645 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.645 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.645 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.645 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.646 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.646 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.646 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.646 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.646 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.646 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.646 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.647 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.647 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.647 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.647 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.647 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.647 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.648 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.648 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.648 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.648 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.648 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.648 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.649 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.649 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.649 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.649 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.649 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.650 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.650 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.650 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.650 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.650 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.651 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.651 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.651 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.651 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.651 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.652 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.652 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.652 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.652 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.652 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.652 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.653 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.653 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.653 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.653 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.654 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.654 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.654 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.654 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.654 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.654 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.654 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.655 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.655 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.655 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.655 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.655 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.655 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.655 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.656 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.656 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.656 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.656 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.656 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.656 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.656 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.657 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.657 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.657 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.657 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.657 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.657 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.657 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.658 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.658 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.658 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.658 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.658 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.658 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.658 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.659 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.659 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.659 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.659 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.659 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.659 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.660 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.660 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.660 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.660 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.660 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.660 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.660 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.660 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.661 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.661 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.661 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.661 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.661 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.661 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.661 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.662 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.662 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.662 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.662 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.662 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.662 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.662 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.663 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.663 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.663 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.663 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.663 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.663 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.664 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.664 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.664 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.664 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.664 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.664 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.664 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.665 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.665 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.665 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.665 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.665 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.665 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.665 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.666 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.666 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.666 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.666 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.666 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.666 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.667 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.667 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.667 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.667 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.667 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.667 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.667 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.668 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.668 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.668 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.668 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.668 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.668 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.669 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.669 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.669 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.669 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.669 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.669 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.670 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.670 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.670 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.670 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.670 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.671 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.671 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.671 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.671 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.671 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.671 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.672 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.672 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.672 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.672 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.672 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.672 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.672 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.673 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.673 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.673 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.673 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.673 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.673 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.673 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.674 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.674 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.674 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.674 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.674 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.674 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.675 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.675 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.675 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.675 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.675 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.675 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.675 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.676 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.676 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.676 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.676 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.676 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.676 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.676 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.677 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.677 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.677 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.677 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.677 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.677 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.678 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.678 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.678 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.678 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.678 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.678 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.678 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.679 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.679 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.679 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.679 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.679 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.679 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.679 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.680 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.680 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.680 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.680 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.680 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.680 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.680 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.681 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.681 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.681 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.681 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.681 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.681 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.681 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.682 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.682 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.682 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.682 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.682 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.682 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.682 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.683 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.683 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.683 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.683 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.683 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.683 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.683 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.683 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.684 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.684 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.684 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.684 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.684 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.684 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.684 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.685 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.685 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.685 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.685 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.685 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.685 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.686 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.686 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.686 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.686 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.686 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.686 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.686 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.687 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.687 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.687 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.687 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.687 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.687 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.687 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.688 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.688 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.688 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.688 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.688 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.688 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.688 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.688 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.689 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.689 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.689 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.689 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.689 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.689 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.689 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.690 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.690 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.690 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.690 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.690 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.690 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.691 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.691 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.691 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.691 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.691 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.691 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.691 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.691 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.692 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.692 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.692 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.692 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.692 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.692 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.692 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.693 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.693 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.693 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.693 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.693 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.693 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.693 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.694 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.694 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.694 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.694 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.694 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.694 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.694 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.695 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.695 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.695 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.695 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.695 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.695 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.695 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.696 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.696 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.696 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.696 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.696 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.696 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.696 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.697 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.697 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.697 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.697 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.697 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.697 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.697 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.698 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.698 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.698 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.698 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.698 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.699 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.699 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.699 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.699 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.699 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.699 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.699 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.700 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.700 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.700 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.700 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.700 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.700 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.700 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.701 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.701 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.701 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.701 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.701 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.701 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.701 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.701 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.702 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.702 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.702 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.702 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.702 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.702 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.703 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.703 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.703 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.703 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.703 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.703 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.703 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.704 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.704 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.704 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.704 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.704 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.704 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.705 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.705 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.705 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.705 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.705 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.705 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.705 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.705 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.706 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.706 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.706 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.706 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.706 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.706 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.706 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.707 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.707 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.707 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.707 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.707 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.707 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.707 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.708 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.708 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.708 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.708 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.708 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.708 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.708 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.709 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.709 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.709 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.709 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.709 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.709 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.709 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.710 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.710 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.710 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.710 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.710 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.710 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.710 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.711 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.711 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.711 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.711 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.711 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.711 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.711 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.712 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.712 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.712 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.712 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.712 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.712 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.713 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.713 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.713 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.713 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.713 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.713 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.714 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.714 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.714 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.714 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.714 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.715 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.715 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.715 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.715 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.715 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.715 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.716 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.716 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.716 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.716 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.716 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.716 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.716 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.717 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.717 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.717 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.717 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.717 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.717 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.718 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.718 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.718 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.718 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.718 254884 WARNING oslo_config.cfg [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 26 10:01:25 compute-0 nova_compute[254880]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 26 10:01:25 compute-0 nova_compute[254880]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 26 10:01:25 compute-0 nova_compute[254880]: and ``live_migration_inbound_addr`` respectively.
Jan 26 10:01:25 compute-0 nova_compute[254880]: ).  Its value may be silently ignored in the future.
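The warning above describes a config migration: the single deprecated option is superseded by two finer-grained ones in the [libvirt] section of nova.conf. A minimal sketch of the replacement settings, assuming the TLS transport implied by the qemu+tls://%s/system value logged below; the inbound address shown is a hypothetical placeholder, not a value taken from this log:

    [libvirt]
    # Scheme part of the old URI: qemu+tls corresponds to scheme "tls".
    live_migration_scheme = tls
    # Target-host part of the old URI; placeholder value, deployment-specific.
    live_migration_inbound_addr = compute-0.internalapi.localdomain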
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.719 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.719 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.719 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.719 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.719 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.720 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.720 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.720 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.720 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.720 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.720 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.720 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.721 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.721 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.721 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.721 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.721 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.721 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.722 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.rbd_secret_uuid        = 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.722 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.722 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.722 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.722 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.722 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.722 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.723 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.723 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.723 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.723 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.723 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.723 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.723 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.724 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.724 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.724 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.724 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.724 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.724 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.725 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.725 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.725 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.725 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.725 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.725 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.725 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.726 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.726 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.726 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.726 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.726 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.727 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.727 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.727 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.727 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.727 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.728 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.728 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.728 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.728 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.728 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.728 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.728 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.729 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.729 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.729 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.729 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.729 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.729 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.730 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.730 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.730 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.730 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.730 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.730 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.730 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.731 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.731 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.731 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.731 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.731 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.732 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.732 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.732 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.732 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.732 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.733 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.733 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.733 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.733 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.733 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.734 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.734 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.734 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.734 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.734 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.735 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.735 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.735 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.735 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.735 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.735 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.735 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.736 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.736 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.736 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.736 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.736 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.736 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.736 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.737 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.737 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.737 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.737 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.737 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.737 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.738 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.738 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.738 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.738 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.738 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.739 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.739 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.739 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.739 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.739 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.740 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.740 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.740 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.740 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.740 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.741 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.741 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.741 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.741 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.741 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.741 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.742 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.742 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.742 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.742 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.742 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.743 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.743 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.743 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.743 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.743 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.743 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.744 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.744 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.744 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.744 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.744 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.744 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.744 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.745 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.745 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.745 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.745 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.745 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.746 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.746 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.746 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.746 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.746 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.746 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.746 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.747 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.747 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.747 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.747 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.747 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.747 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.747 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.748 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.748 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.748 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.748 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.748 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.748 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.749 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.749 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.749 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.749 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.749 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.749 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.750 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.750 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.750 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.750 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.750 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.750 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.751 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.751 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.751 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.751 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.751 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.751 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.752 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.752 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.752 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.752 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.752 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.753 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.753 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.753 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.753 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.753 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.753 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.753 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.754 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.754 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.754 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.754 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.754 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.754 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.754 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.755 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.755 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.755 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.755 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.755 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.755 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.755 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.756 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.756 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.756 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.756 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.756 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.756 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.756 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.757 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.757 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.757 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.757 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.757 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.757 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.757 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.758 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.758 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.758 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.758 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.758 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.758 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.759 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.759 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.759 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.759 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.759 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.759 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.760 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.760 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.760 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.760 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.760 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.760 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.761 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.761 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.761 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.761 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.761 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.761 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.762 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.762 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.762 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.762 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.762 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.762 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.762 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.763 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.763 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.763 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.763 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.763 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.763 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.763 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.764 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.764 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.764 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.764 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.764 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.764 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.765 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.765 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.765 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.765 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.765 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.765 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.765 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.766 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.766 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.766 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.766 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.766 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.766 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.766 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.767 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.767 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.767 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.767 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.767 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.767 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.767 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.768 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.768 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.768 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.768 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.768 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.768 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.769 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.769 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.769 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.769 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.769 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.769 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.769 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.770 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.770 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.770 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.770 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.770 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.771 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.771 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.771 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.771 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.771 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.771 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.771 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.772 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.772 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.772 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.772 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.772 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.772 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.772 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.773 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.773 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.773 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.773 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.773 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.773 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.774 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.774 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.774 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.774 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.774 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.774 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.774 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.775 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.775 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.775 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.775 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.775 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.775 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.775 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.776 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.776 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.776 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.776 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.776 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.776 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.776 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.777 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.777 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.777 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.777 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.777 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.777 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.777 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.778 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.778 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.778 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.778 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.778 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.779 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.779 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.779 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.779 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.779 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.780 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.780 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.780 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.780 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.780 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.780 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.780 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.781 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.781 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.781 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.781 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.781 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.781 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.781 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.782 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.782 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000037s ======
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.782 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:25.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.782 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.782 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.782 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.783 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.783 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.783 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.783 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.783 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.783 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.783 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.784 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.784 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.784 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.784 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.784 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.784 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.784 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.784 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.785 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.785 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.785 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.785 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.785 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.785 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.785 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.786 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.786 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.786 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.786 254884 DEBUG oslo_service.service [None req-9b329ea0-7645-4ae5-88c6-6a3867f63a85 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
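[editor's note] The long "group.option = value" dump above is produced by oslo.config's ConfigOpts.log_opt_values(), the call site cited on every line (oslo_config/cfg.py:2609); nova-compute invokes it at DEBUG level once the config is parsed, just before service start. A minimal sketch of that mechanism, assuming oslo.config is installed; the single option registered here is illustrative, nova registers hundreds across many groups:

    import logging
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    # Illustrative option; real services register full option sets per group.
    CONF.register_opts([cfg.BoolOpt('ssl', default=False)],
                       group='oslo_messaging_rabbit')

    logging.basicConfig(level=logging.DEBUG)
    CONF(args=[], project='demo')  # parse an (empty) command line
    # Emits "oslo_messaging_rabbit.ssl = False log_opt_values ..." lines
    # in the same format as the dump above, ending with a row of asterisks.
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)

Note that secret-bearing options (e.g. oslo_limit.password, transport URLs) are masked as **** by this dump, as seen above.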
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.787 254884 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.804 254884 INFO nova.virt.node [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Determined node identity 0dd9ba26-1c92-4319-953d-4e0ed59143cf from /var/lib/nova/compute_id
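[editor's note] The node identity logged above is read from a plain-text UUID file; roughly (an assumption, sketching what nova.virt.node does with the path named in the log):

    # /var/lib/nova/compute_id holds the stable node UUID
    # (0dd9ba26-1c92-4319-953d-4e0ed59143cf in the line above).
    with open('/var/lib/nova/compute_id') as f:
        node_uuid = f.read().strip()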
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.804 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.805 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.805 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.806 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.819 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0a7df9f280> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.822 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0a7df9f280> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.823 254884 INFO nova.virt.libvirt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Connection event '1' reason 'None'
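[editor's note] The sequence above (event threads started, then "Connecting to libvirt: qemu:///system", then registration for lifecycle and connection events) is nova.virt.libvirt.host.Host opening its hypervisor connection; the <capabilities> XML dumped below is what libvirt returns over that connection. A hedged sketch of the equivalent calls with the libvirt Python bindings, assuming libvirt-python and a local qemu:///system socket:

    import libvirt

    conn = libvirt.open('qemu:///system')  # same URI as in the log above
    caps_xml = conn.getCapabilities()      # the <capabilities> XML seen below
    print(caps_xml)
    conn.close()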
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.830 254884 INFO nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Libvirt host capabilities <capabilities>
Jan 26 10:01:25 compute-0 nova_compute[254880]: 
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <host>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <uuid>e1437fe8-638e-4e57-ae56-ce26d7011781</uuid>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <cpu>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <arch>x86_64</arch>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model>EPYC-Rome-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <vendor>AMD</vendor>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <microcode version='16777317'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <signature family='23' model='49' stepping='0'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='x2apic'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='tsc-deadline'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='osxsave'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='hypervisor'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='tsc_adjust'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='spec-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='stibp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='arch-capabilities'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='ssbd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='cmp_legacy'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='topoext'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='virt-ssbd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='lbrv'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='tsc-scale'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='vmcb-clean'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='pause-filter'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='pfthreshold'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='svme-addr-chk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='rdctl-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='skip-l1dfl-vmentry'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='mds-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature name='pschange-mc-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <pages unit='KiB' size='4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <pages unit='KiB' size='2048'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <pages unit='KiB' size='1048576'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </cpu>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <power_management>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <suspend_mem/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </power_management>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <iommu support='no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <migration_features>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <live/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <uri_transports>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <uri_transport>tcp</uri_transport>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <uri_transport>rdma</uri_transport>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </uri_transports>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </migration_features>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <topology>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <cells num='1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <cell id='0'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:           <memory unit='KiB'>7864308</memory>
Jan 26 10:01:25 compute-0 nova_compute[254880]:           <pages unit='KiB' size='4'>1966077</pages>
Jan 26 10:01:25 compute-0 nova_compute[254880]:           <pages unit='KiB' size='2048'>0</pages>
Jan 26 10:01:25 compute-0 nova_compute[254880]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 26 10:01:25 compute-0 nova_compute[254880]:           <distances>
Jan 26 10:01:25 compute-0 nova_compute[254880]:             <sibling id='0' value='10'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:           </distances>
Jan 26 10:01:25 compute-0 nova_compute[254880]:           <cpus num='8'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:           </cpus>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         </cell>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </cells>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </topology>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <cache>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </cache>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <secmodel>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model>selinux</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <doi>0</doi>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </secmodel>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <secmodel>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model>dac</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <doi>0</doi>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </secmodel>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   </host>
Jan 26 10:01:25 compute-0 nova_compute[254880]: 
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <guest>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <os_type>hvm</os_type>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <arch name='i686'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <wordsize>32</wordsize>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <domain type='qemu'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <domain type='kvm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </arch>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <features>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <pae/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <nonpae/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <acpi default='on' toggle='yes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <apic default='on' toggle='no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <cpuselection/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <deviceboot/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <disksnapshot default='on' toggle='no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <externalSnapshot/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </features>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   </guest>
Jan 26 10:01:25 compute-0 nova_compute[254880]: 
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <guest>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <os_type>hvm</os_type>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <arch name='x86_64'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <wordsize>64</wordsize>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <domain type='qemu'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <domain type='kvm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </arch>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <features>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <acpi default='on' toggle='yes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <apic default='on' toggle='no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <cpuselection/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <deviceboot/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <disksnapshot default='on' toggle='no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <externalSnapshot/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </features>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   </guest>
Jan 26 10:01:25 compute-0 nova_compute[254880]: 
Jan 26 10:01:25 compute-0 nova_compute[254880]: </capabilities>
Jan 26 10:01:25 compute-0 nova_compute[254880]: 
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.838 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.843 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 26 10:01:25 compute-0 nova_compute[254880]: <domainCapabilities>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <domain>kvm</domain>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <arch>i686</arch>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <vcpu max='240'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <iothreads supported='yes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <os supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <enum name='firmware'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <loader supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>rom</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>pflash</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='readonly'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>yes</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>no</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='secure'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>no</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </loader>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   </os>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <cpu>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <mode name='host-passthrough' supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='hostPassthroughMigratable'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>on</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>off</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <mode name='maximum' supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='maximumMigratable'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>on</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>off</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <mode name='host-model' supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <vendor>AMD</vendor>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='x2apic'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='hypervisor'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='stibp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='ssbd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='overflow-recov'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='succor'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='ibrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='lbrv'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='tsc-scale'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='flushbyasid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='pause-filter'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='pfthreshold'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='disable' name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <mode name='custom' supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-noTSX'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='ClearwaterForest'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bhi-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ddpd-u'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sha512'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sm3'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sm4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='ClearwaterForest-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bhi-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ddpd-u'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sha512'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sm3'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sm4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cooperlake'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cooperlake-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cooperlake-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Denverton'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Denverton-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Denverton-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Denverton-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Dhyana-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Genoa'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='perfmon-v2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Turin'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='perfmon-v2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbpb'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Turin-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='perfmon-v2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbpb'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-v5'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10-128'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10-256'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10-512'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10-128'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10-256'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10-512'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-noTSX'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v5'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v6'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v7'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='IvyBridge'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='IvyBridge-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='IvyBridge-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='IvyBridge-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='KnightsMill'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512er'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512pf'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='KnightsMill-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512er'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512pf'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Opteron_G4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Opteron_G4-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Opteron_G5'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tbm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Opteron_G5-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tbm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='SierraForest'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='SierraForest-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='SierraForest-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='SierraForest-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v5'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Snowridge'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='athlon'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='athlon-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='core2duo'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='core2duo-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='coreduo'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='coreduo-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='n270'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='n270-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='phenom'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='phenom-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   </cpu>
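The <cpu> section that closes above is libvirt's domain-capabilities report as logged by nova_compute: each <model> in custom mode carries usable='yes' or 'no', and every unusable model is followed by a <blockers> element naming the features the host CPU lacks. A minimal sketch of reading the same report programmatically with the python-libvirt bindings; the qemu:///system URI and the 'kvm' virt type are illustrative assumptions, not taken from the log:

    import libvirt                       # python-libvirt bindings
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')                   # URI is an assumption
    caps_xml = conn.getDomainCapabilities(virttype='kvm')   # same XML as logged above
    conn.close()

    root = ET.fromstring(caps_xml)

    # Index each <blockers> element by the model name it annotates.
    blockers = {
        b.get('model'): [f.get('name') for f in b.findall('feature')]
        for b in root.iter('blockers')
    }

    # Walk the custom-mode models and report why each one is (un)usable.
    for model in root.findall(".//cpu/mode[@name='custom']/model"):
        if model.get('usable') == 'yes':
            print(f"{model.text}: usable")
        else:
            missing = ', '.join(blockers.get(model.text, [])) or 'unspecified'
            print(f"{model.text}: blocked (missing: {missing})")

Against the capabilities logged here, a sketch like this would report the Westmere variants as usable while every SapphireRapids, SierraForest, Skylake and Snowridge variant is blocked on host-missing features such as the avx512*/amx-* groups.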
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <memoryBacking supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <enum name='sourceType'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <value>file</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <value>anonymous</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <value>memfd</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   </memoryBacking>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <devices>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <disk supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='diskDevice'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>disk</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>cdrom</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>floppy</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>lun</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='bus'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>ide</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>fdc</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>scsi</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>usb</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>sata</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>virtio-transitional</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>virtio-non-transitional</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <graphics supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>vnc</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>egl-headless</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>dbus</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </graphics>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <video supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='modelType'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>vga</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>cirrus</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>none</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>bochs</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>ramfb</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </video>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <hostdev supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='mode'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>subsystem</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='startupPolicy'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>default</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>mandatory</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>requisite</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>optional</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='subsysType'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>usb</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>pci</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>scsi</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='capsType'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='pciBackend'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </hostdev>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <rng supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>virtio-transitional</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>virtio-non-transitional</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='backendModel'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>random</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>egd</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>builtin</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </rng>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <filesystem supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='driverType'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>path</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>handle</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>virtiofs</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </filesystem>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <tpm supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>tpm-tis</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>tpm-crb</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='backendModel'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>emulator</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>external</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='backendVersion'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>2.0</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </tpm>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <redirdev supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='bus'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>usb</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </redirdev>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <channel supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>pty</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>unix</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </channel>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <crypto supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='model'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>qemu</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='backendModel'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>builtin</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </crypto>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <interface supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='backendType'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>default</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>passt</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <panic supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>isa</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>hyperv</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </panic>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <console supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>null</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>vc</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>pty</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>dev</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>file</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>pipe</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>stdio</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>udp</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>tcp</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>unix</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>qemu-vdagent</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>dbus</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </console>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   </devices>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <features>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <gic supported='no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <vmcoreinfo supported='yes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <genid supported='yes'/>
Jan 26 10:01:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:25 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <backingStoreInput supported='yes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <backup supported='yes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <async-teardown supported='yes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <s390-pv supported='no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <ps2 supported='yes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <tdx supported='no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <sev supported='no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <sgx supported='no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <hyperv supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='features'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>relaxed</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>vapic</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>spinlocks</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>vpindex</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>runtime</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>synic</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>stimer</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>reset</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>vendor_id</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>frequencies</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>reenlightenment</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>tlbflush</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>ipi</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>avic</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>emsr_bitmap</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>xmm_input</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <defaults>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <spinlocks>4095</spinlocks>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <stimer_direct>on</stimer_direct>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </defaults>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </hyperv>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <launchSecurity supported='no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   </features>
Jan 26 10:01:25 compute-0 nova_compute[254880]: </domainCapabilities>
Jan 26 10:01:25 compute-0 nova_compute[254880]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.850 254884 DEBUG nova.virt.libvirt.volume.mount [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 26 10:01:25 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.851 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 26 10:01:25 compute-0 nova_compute[254880]: <domainCapabilities>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <domain>kvm</domain>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <arch>i686</arch>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <vcpu max='4096'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <iothreads supported='yes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <os supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <enum name='firmware'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <loader supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>rom</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>pflash</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='readonly'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>yes</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>no</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='secure'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>no</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </loader>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   </os>
Jan 26 10:01:25 compute-0 nova_compute[254880]:   <cpu>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <mode name='host-passthrough' supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='hostPassthroughMigratable'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>on</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>off</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <mode name='maximum' supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <enum name='maximumMigratable'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>on</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <value>off</value>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <mode name='host-model' supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <vendor>AMD</vendor>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='x2apic'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='hypervisor'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='stibp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='ssbd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='overflow-recov'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='succor'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='ibrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='lbrv'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='tsc-scale'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='flushbyasid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='pause-filter'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='pfthreshold'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <feature policy='disable' name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:25 compute-0 nova_compute[254880]:     <mode name='custom' supported='yes'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-noTSX'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='ClearwaterForest'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bhi-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ddpd-u'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sha512'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sm3'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sm4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='ClearwaterForest-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bhi-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ddpd-u'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sha512'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sm3'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sm4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cooperlake'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cooperlake-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Cooperlake-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Denverton'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Denverton-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Denverton-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Denverton-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Dhyana-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Genoa'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='perfmon-v2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Turin'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='perfmon-v2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbpb'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-Turin-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='perfmon-v2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbpb'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='EPYC-v5'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10-128'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10-256'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10-512'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10-128'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10-256'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx10-512'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-noTSX'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Haswell-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v5'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v6'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v7'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='IvyBridge'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='IvyBridge-IBRS'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='IvyBridge-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='IvyBridge-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='KnightsMill'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512er'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512pf'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='KnightsMill-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512er'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512pf'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Opteron_G4'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Opteron_G4-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Opteron_G5'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tbm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='Opteron_G5-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tbm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v1'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v2'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 10:01:25 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v3'>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:25 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SierraForest'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SierraForest-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SierraForest-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SierraForest-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v5'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='athlon'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='athlon-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='core2duo'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='core2duo-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='coreduo'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='coreduo-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='n270'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='n270-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='phenom'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='phenom-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </cpu>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <memoryBacking supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <enum name='sourceType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>file</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>anonymous</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>memfd</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </memoryBacking>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <devices>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <disk supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='diskDevice'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>disk</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>cdrom</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>floppy</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>lun</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='bus'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>fdc</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>scsi</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>usb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>sata</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio-transitional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio-non-transitional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <graphics supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vnc</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>egl-headless</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>dbus</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </graphics>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <video supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='modelType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vga</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>cirrus</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>none</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>bochs</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>ramfb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </video>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <hostdev supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='mode'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>subsystem</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='startupPolicy'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>default</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>mandatory</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>requisite</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>optional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='subsysType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>usb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pci</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>scsi</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='capsType'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='pciBackend'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </hostdev>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <rng supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio-transitional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio-non-transitional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendModel'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>random</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>egd</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>builtin</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </rng>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <filesystem supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='driverType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>path</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>handle</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtiofs</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </filesystem>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <tpm supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>tpm-tis</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>tpm-crb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendModel'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>emulator</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>external</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendVersion'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>2.0</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </tpm>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <redirdev supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='bus'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>usb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </redirdev>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <channel supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pty</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>unix</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </channel>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <crypto supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>qemu</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendModel'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>builtin</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </crypto>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <interface supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>default</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>passt</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <panic supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>isa</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>hyperv</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </panic>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <console supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>null</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vc</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pty</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>dev</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>file</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pipe</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>stdio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>udp</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>tcp</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>unix</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>qemu-vdagent</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>dbus</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </console>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </devices>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <features>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <gic supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <vmcoreinfo supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <genid supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <backingStoreInput supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <backup supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <async-teardown supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <s390-pv supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <ps2 supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <tdx supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <sev supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <sgx supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <hyperv supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='features'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>relaxed</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vapic</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>spinlocks</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vpindex</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>runtime</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>synic</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>stimer</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>reset</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vendor_id</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>frequencies</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>reenlightenment</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>tlbflush</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>ipi</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>avic</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>emsr_bitmap</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>xmm_input</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <defaults>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <spinlocks>4095</spinlocks>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <stimer_direct>on</stimer_direct>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </defaults>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </hyperv>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <launchSecurity supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </features>
Jan 26 10:01:26 compute-0 nova_compute[254880]: </domainCapabilities>
Jan 26 10:01:26 compute-0 nova_compute[254880]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.931 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:25.937 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 26 10:01:26 compute-0 nova_compute[254880]: <domainCapabilities>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <domain>kvm</domain>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <arch>x86_64</arch>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <vcpu max='240'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <iothreads supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <os supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <enum name='firmware'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <loader supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>rom</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pflash</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='readonly'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>yes</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>no</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='secure'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>no</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </loader>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </os>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <cpu>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <mode name='host-passthrough' supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='hostPassthroughMigratable'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>on</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>off</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <mode name='maximum' supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='maximumMigratable'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>on</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>off</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <mode name='host-model' supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <vendor>AMD</vendor>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='x2apic'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='hypervisor'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='stibp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='ssbd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='overflow-recov'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='succor'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='ibrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='lbrv'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='tsc-scale'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='flushbyasid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='pause-filter'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='pfthreshold'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='disable' name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <mode name='custom' supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-noTSX'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='ClearwaterForest'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ddpd-u'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sha512'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sm3'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sm4'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='ClearwaterForest-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ddpd-u'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sha512'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sm3'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sm4'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cooperlake'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cooperlake-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cooperlake-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Denverton'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Denverton-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Denverton-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Denverton-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Dhyana-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Genoa'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='perfmon-v2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Turin'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='perfmon-v2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbpb'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Turin-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='perfmon-v2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbpb'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-v5'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10-128'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10-256'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10-512'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10-128'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10-256'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10-512'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-noTSX'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v5'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v6'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v7'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='IvyBridge'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='IvyBridge-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='IvyBridge-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='IvyBridge-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='KnightsMill'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512er'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512pf'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='KnightsMill-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512er'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512pf'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Opteron_G4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Opteron_G4-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Opteron_G5'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tbm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Opteron_G5-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tbm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SierraForest'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SierraForest-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SierraForest-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SierraForest-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v5'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='athlon'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='athlon-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='core2duo'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='core2duo-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='coreduo'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='coreduo-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='n270'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='n270-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='phenom'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='phenom-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </cpu>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <memoryBacking supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <enum name='sourceType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>file</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>anonymous</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>memfd</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </memoryBacking>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <devices>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <disk supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='diskDevice'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>disk</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>cdrom</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>floppy</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>lun</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='bus'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>ide</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>fdc</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>scsi</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>usb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>sata</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio-transitional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio-non-transitional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <graphics supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vnc</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>egl-headless</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>dbus</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </graphics>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <video supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='modelType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vga</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>cirrus</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>none</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>bochs</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>ramfb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </video>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <hostdev supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='mode'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>subsystem</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='startupPolicy'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>default</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>mandatory</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>requisite</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>optional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='subsysType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>usb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pci</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>scsi</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='capsType'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='pciBackend'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </hostdev>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <rng supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio-transitional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio-non-transitional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendModel'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>random</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>egd</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>builtin</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </rng>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <filesystem supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='driverType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>path</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>handle</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtiofs</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </filesystem>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <tpm supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>tpm-tis</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>tpm-crb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendModel'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>emulator</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>external</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendVersion'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>2.0</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </tpm>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <redirdev supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='bus'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>usb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </redirdev>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <channel supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pty</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>unix</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </channel>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <crypto supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>qemu</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendModel'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>builtin</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </crypto>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <interface supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>default</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>passt</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <panic supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>isa</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>hyperv</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </panic>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <console supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>null</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vc</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pty</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>dev</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>file</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pipe</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>stdio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>udp</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>tcp</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>unix</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>qemu-vdagent</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>dbus</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </console>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </devices>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <features>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <gic supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <vmcoreinfo supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <genid supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <backingStoreInput supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <backup supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <async-teardown supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <s390-pv supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <ps2 supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <tdx supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <sev supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <sgx supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <hyperv supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='features'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>relaxed</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vapic</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>spinlocks</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vpindex</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>runtime</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>synic</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>stimer</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>reset</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vendor_id</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>frequencies</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>reenlightenment</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>tlbflush</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>ipi</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>avic</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>emsr_bitmap</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>xmm_input</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <defaults>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <spinlocks>4095</spinlocks>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <stimer_direct>on</stimer_direct>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </defaults>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </hyperv>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <launchSecurity supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </features>
Jan 26 10:01:26 compute-0 nova_compute[254880]: </domainCapabilities>
Jan 26 10:01:26 compute-0 nova_compute[254880]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.042 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 26 10:01:26 compute-0 nova_compute[254880]: <domainCapabilities>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <domain>kvm</domain>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <arch>x86_64</arch>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <vcpu max='4096'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <iothreads supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <os supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <enum name='firmware'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>efi</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <loader supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>rom</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pflash</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='readonly'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>yes</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>no</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='secure'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>yes</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>no</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </loader>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </os>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <cpu>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <mode name='host-passthrough' supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='hostPassthroughMigratable'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>on</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>off</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <mode name='maximum' supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='maximumMigratable'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>on</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>off</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <mode name='host-model' supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <vendor>AMD</vendor>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='x2apic'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='hypervisor'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='stibp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='ssbd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='overflow-recov'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='succor'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='ibrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='lbrv'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='tsc-scale'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='flushbyasid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='pause-filter'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='pfthreshold'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <feature policy='disable' name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <mode name='custom' supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-noTSX'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Broadwell-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='ClearwaterForest'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ddpd-u'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sha512'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sm3'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sm4'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='ClearwaterForest-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ddpd-u'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sha512'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sm3'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sm4'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cooperlake'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cooperlake-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Cooperlake-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Denverton'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Denverton-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Denverton-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Denverton-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Dhyana-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Genoa'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='perfmon-v2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Milan-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 rsyslogd[1007]: imjournal from <np0005595444:nova_compute>: begin to drop messages due to rate-limiting
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Rome-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Turin'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='perfmon-v2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbpb'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-Turin-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amd-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='auto-ibrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vp2intersect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fs-gs-base-ns'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibpb-brtype'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='no-nested-data-bp'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='null-sel-clr-base'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='perfmon-v2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbpb'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='srso-user-kernel-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='stibp-always-on'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='EPYC-v5'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10-128'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10-256'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10-512'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='GraniteRapids-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10-128'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10-256'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx10-512'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='prefetchiti'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-noTSX'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Haswell-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v5'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v6'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Icelake-Server-v7'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='IvyBridge'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='IvyBridge-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='IvyBridge-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='IvyBridge-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='KnightsMill'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512er'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512pf'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='KnightsMill-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-4fmaps'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-4vnniw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512er'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512pf'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Opteron_G4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Opteron_G4-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Opteron_G5'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tbm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Opteron_G5-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fma4'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tbm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xop'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SapphireRapids-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='amx-tile'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-bf16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-fp16'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512-vpopcntdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bitalg'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vbmi2'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrc'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fzrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='la57'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='taa-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='tsx-ldtrk'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SierraForest'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SierraForest-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SierraForest-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='SierraForest-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ifma'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-ne-convert'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx-vnni-int8'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bhi-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='bus-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cmpccxadd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fbsdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='fsrs'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ibrs-all'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='intel-psfd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ipred-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='lam'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mcdt-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pbrsb-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='psdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rrsba-ctrl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='sbdr-ssdp-no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='serialize'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vaes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='vpclmulqdq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Client-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='hle'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='rtm'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Skylake-Server-v5'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512bw'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512cd'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512dq'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512f'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='avx512vl'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='invpcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pcid'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='pku'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='mpx'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v2'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v3'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='core-capability'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='split-lock-detect'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='Snowridge-v4'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='cldemote'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='erms'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='gfni'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdir64b'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='movdiri'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='xsaves'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='athlon'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='athlon-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='core2duo'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='core2duo-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='coreduo'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='coreduo-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='n270'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='n270-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='ss'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='phenom'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <blockers model='phenom-v1'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnow'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <feature name='3dnowext'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </blockers>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </mode>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </cpu>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <memoryBacking supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <enum name='sourceType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>file</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>anonymous</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <value>memfd</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </memoryBacking>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <devices>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <disk supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='diskDevice'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>disk</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>cdrom</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>floppy</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>lun</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='bus'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>fdc</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>scsi</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>usb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>sata</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio-transitional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio-non-transitional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <graphics supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vnc</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>egl-headless</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>dbus</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </graphics>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <video supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='modelType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vga</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>cirrus</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>none</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>bochs</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>ramfb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </video>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <hostdev supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='mode'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>subsystem</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='startupPolicy'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>default</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>mandatory</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>requisite</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>optional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='subsysType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>usb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pci</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>scsi</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='capsType'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='pciBackend'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </hostdev>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <rng supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio-transitional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtio-non-transitional</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendModel'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>random</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>egd</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>builtin</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </rng>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <filesystem supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='driverType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>path</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>handle</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>virtiofs</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </filesystem>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <tpm supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>tpm-tis</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>tpm-crb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendModel'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>emulator</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>external</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendVersion'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>2.0</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </tpm>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <redirdev supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='bus'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>usb</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </redirdev>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <channel supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pty</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>unix</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </channel>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <crypto supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>qemu</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendModel'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>builtin</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </crypto>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <interface supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='backendType'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>default</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>passt</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <panic supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='model'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>isa</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>hyperv</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </panic>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <console supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='type'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>null</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vc</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pty</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>dev</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>file</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>pipe</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>stdio</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>udp</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>tcp</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>unix</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>qemu-vdagent</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>dbus</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </console>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </devices>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   <features>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <gic supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <vmcoreinfo supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <genid supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <backingStoreInput supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <backup supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <async-teardown supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <s390-pv supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <ps2 supported='yes'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <tdx supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <sev supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <sgx supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <hyperv supported='yes'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <enum name='features'>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>relaxed</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vapic</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>spinlocks</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vpindex</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>runtime</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>synic</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>stimer</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>reset</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>vendor_id</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>frequencies</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>reenlightenment</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>tlbflush</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>ipi</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>avic</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>emsr_bitmap</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <value>xmm_input</value>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </enum>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       <defaults>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <spinlocks>4095</spinlocks>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <stimer_direct>on</stimer_direct>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 10:01:26 compute-0 nova_compute[254880]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 10:01:26 compute-0 nova_compute[254880]:       </defaults>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     </hyperv>
Jan 26 10:01:26 compute-0 nova_compute[254880]:     <launchSecurity supported='no'/>
Jan 26 10:01:26 compute-0 nova_compute[254880]:   </features>
Jan 26 10:01:26 compute-0 nova_compute[254880]: </domainCapabilities>
Jan 26 10:01:26 compute-0 nova_compute[254880]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.147 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.157 254884 INFO nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Secure Boot support detected
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.159 254884 INFO nova.virt.libvirt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.169 254884 DEBUG nova.virt.libvirt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.195 254884 INFO nova.virt.node [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Determined node identity 0dd9ba26-1c92-4319-953d-4e0ed59143cf from /var/lib/nova/compute_id
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.219 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Verified node 0dd9ba26-1c92-4319-953d-4e0ed59143cf matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Jan 26 10:01:26 compute-0 ceph-mon[74456]: pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.301 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.385 254884 DEBUG oslo_concurrency.lockutils [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.386 254884 DEBUG oslo_concurrency.lockutils [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.386 254884 DEBUG oslo_concurrency.lockutils [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.386 254884 DEBUG nova.compute.resource_tracker [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.387 254884 DEBUG oslo_concurrency.processutils [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:01:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:01:26] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 10:01:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:01:26] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 10:01:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:26 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:01:26 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/512875373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.821 254884 DEBUG oslo_concurrency.processutils [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.987 254884 WARNING nova.virt.libvirt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.988 254884 DEBUG nova.compute.resource_tracker [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4942MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.989 254884 DEBUG oslo_concurrency.lockutils [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:01:26 compute-0 nova_compute[254880]: 2026-01-26 10:01:26.989 254884 DEBUG oslo_concurrency.lockutils [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:01:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:01:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:01:27.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:01:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:01:27.074Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:01:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:01:27.075Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.213 254884 DEBUG nova.compute.resource_tracker [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.213 254884 DEBUG nova.compute.resource_tracker [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.228 254884 DEBUG nova.scheduler.client.report [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Refreshing inventories for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.288 254884 DEBUG nova.scheduler.client.report [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Updating ProviderTree inventory for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf from _refresh_and_get_inventory using data: {} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.288 254884 DEBUG nova.compute.provider_tree [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.323 254884 DEBUG nova.scheduler.client.report [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Refreshing aggregate associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.344 254884 DEBUG nova.scheduler.client.report [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Refreshing trait associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, traits: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 10:01:27 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/512875373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.362 254884 DEBUG oslo_concurrency.processutils [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:01:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:27 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b24001f30 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:27 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:27.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000037s ======
Jan 26 10:01:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:27.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 26 10:01:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:01:27 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1208342742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.836 254884 DEBUG oslo_concurrency.processutils [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.842 254884 DEBUG nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 26 10:01:27 compute-0 nova_compute[254880]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.842 254884 INFO nova.virt.libvirt.host [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] kernel doesn't support AMD SEV
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.843 254884 DEBUG nova.compute.provider_tree [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Updating inventory in ProviderTree for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.843 254884 DEBUG nova.virt.libvirt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.887 254884 DEBUG nova.scheduler.client.report [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Updated inventory for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.887 254884 DEBUG nova.compute.provider_tree [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Updating resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.888 254884 DEBUG nova.compute.provider_tree [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Updating inventory in ProviderTree for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 10:01:27 compute-0 nova_compute[254880]: 2026-01-26 10:01:27.981 254884 DEBUG nova.compute.provider_tree [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Updating resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 26 10:01:28 compute-0 nova_compute[254880]: 2026-01-26 10:01:28.005 254884 DEBUG nova.compute.resource_tracker [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:01:28 compute-0 nova_compute[254880]: 2026-01-26 10:01:28.005 254884 DEBUG oslo_concurrency.lockutils [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:01:28 compute-0 nova_compute[254880]: 2026-01-26 10:01:28.005 254884 DEBUG nova.service [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 26 10:01:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:01:28 compute-0 nova_compute[254880]: 2026-01-26 10:01:28.079 254884 DEBUG nova.service [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 26 10:01:28 compute-0 nova_compute[254880]: 2026-01-26 10:01:28.079 254884 DEBUG nova.servicegroup.drivers.db [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 26 10:01:28 compute-0 ceph-mon[74456]: pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:01:28 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1208342742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:01:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:28 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:28 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:01:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:28 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:01:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:01:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240033c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000037s ======
Jan 26 10:01:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:29.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 26 10:01:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000037s ======
Jan 26 10:01:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:29.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 26 10:01:30 compute-0 podman[255228]: 2026-01-26 10:01:30.135035055 +0000 UTC m=+0.061553264 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 26 10:01:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:30 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:30 compute-0 ceph-mon[74456]: pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:01:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 853 B/s wr, 2 op/s
Jan 26 10:01:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:31 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:31 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:31.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000037s ======
Jan 26 10:01:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:31.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 26 10:01:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:31 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 10:01:32 compute-0 ceph-mon[74456]: pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 853 B/s wr, 2 op/s
Jan 26 10:01:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:32 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240033c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:01:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Jan 26 10:01:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:33 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100133 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:01:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:33 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:33.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:01:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:01:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000037s ======
Jan 26 10:01:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:33.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 26 10:01:34 compute-0 ceph-mon[74456]: pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Jan 26 10:01:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:01:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:34 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 10:01:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:35 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240033c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:35 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:35.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:35.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:36 compute-0 ceph-mon[74456]: pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 10:01:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:01:36] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 26 10:01:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:01:36] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 26 10:01:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:36 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 853 B/s wr, 2 op/s
Jan 26 10:01:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:01:37.075Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:01:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:01:37.076Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:01:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:37 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:37 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:37.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:37.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:01:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100138 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 10:01:38 compute-0 ceph-mon[74456]: pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 853 B/s wr, 2 op/s
Jan 26 10:01:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:38 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 853 B/s wr, 2 op/s
Jan 26 10:01:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:39 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:39 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:39.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:39 compute-0 ceph-mon[74456]: pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 853 B/s wr, 2 op/s
Jan 26 10:01:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.003000111s ======
Jan 26 10:01:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:39.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000111s
Jan 26 10:01:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:40 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:40 compute-0 sudo[255259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:01:40 compute-0 sudo[255259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:01:40 compute-0 sudo[255259]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 10:01:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:41.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:41.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:42 compute-0 ceph-mon[74456]: pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 10:01:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:42 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:01:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:42 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:01:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Jan 26 10:01:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:43 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:43 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30001080 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:43.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:43.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:44 compute-0 ceph-mon[74456]: pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Jan 26 10:01:44 compute-0 podman[255288]: 2026-01-26 10:01:44.22177424 +0000 UTC m=+0.140680989 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:01:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:44 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Jan 26 10:01:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:45 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:01:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:45 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:01:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:45 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:45 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:45.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000037s ======
Jan 26 10:01:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:45.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 26 10:01:46 compute-0 ceph-mon[74456]: pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Jan 26 10:01:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:01:46] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 26 10:01:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:01:46] "GET /metrics HTTP/1.1" 200 48268 "" "Prometheus/2.51.0"
Jan 26 10:01:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:46 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30001080 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 26 10:01:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:01:47.077Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:01:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:47 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:47 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:47.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:47.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:01:48 compute-0 ceph-mon[74456]: pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 26 10:01:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:48 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 10:01:48 compute-0 sudo[255321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:01:48 compute-0 sudo[255321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:01:48 compute-0 sudo[255321]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:48 compute-0 sudo[255347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:01:48 compute-0 sudo[255347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:01:48 compute-0 sshd-session[255319]: Invalid user oracle from 157.245.76.178 port 38798
Jan 26 10:01:48 compute-0 sshd-session[255319]: Connection closed by invalid user oracle 157.245.76.178 port 38798 [preauth]
Jan 26 10:01:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:48 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:01:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:01:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:01:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:01:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:01:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:01:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:01:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:01:48 compute-0 sudo[255347]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 26 10:01:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:01:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:01:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:01:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:01:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:01:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:01:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:01:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:01:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:01:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:01:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:01:49 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:01:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:01:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:01:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:01:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:01:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:01:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:01:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:01:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:01:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:01:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:01:49 compute-0 sudo[255403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:01:49 compute-0 sudo[255403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:01:49 compute-0 sudo[255403]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:49 compute-0 sudo[255428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:01:49 compute-0 sudo[255428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:01:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:49 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30002390 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:49 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000037s ======
Jan 26 10:01:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:49.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 26 10:01:49 compute-0 podman[255492]: 2026-01-26 10:01:49.621226838 +0000 UTC m=+0.047292518 container create c4b730ab1b00f16fc3783b8a870dadecf6d7b94ed247012b7bfa5d7d10908298 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_hermann, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 10:01:49 compute-0 systemd[1]: Started libpod-conmon-c4b730ab1b00f16fc3783b8a870dadecf6d7b94ed247012b7bfa5d7d10908298.scope.
Jan 26 10:01:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:01:49 compute-0 podman[255492]: 2026-01-26 10:01:49.595543203 +0000 UTC m=+0.021608923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:01:49 compute-0 podman[255492]: 2026-01-26 10:01:49.700652343 +0000 UTC m=+0.126718023 container init c4b730ab1b00f16fc3783b8a870dadecf6d7b94ed247012b7bfa5d7d10908298 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:01:49 compute-0 podman[255492]: 2026-01-26 10:01:49.709572168 +0000 UTC m=+0.135637888 container start c4b730ab1b00f16fc3783b8a870dadecf6d7b94ed247012b7bfa5d7d10908298 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 26 10:01:49 compute-0 podman[255492]: 2026-01-26 10:01:49.71361541 +0000 UTC m=+0.139681120 container attach c4b730ab1b00f16fc3783b8a870dadecf6d7b94ed247012b7bfa5d7d10908298 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_hermann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 10:01:49 compute-0 recursing_hermann[255509]: 167 167
Jan 26 10:01:49 compute-0 systemd[1]: libpod-c4b730ab1b00f16fc3783b8a870dadecf6d7b94ed247012b7bfa5d7d10908298.scope: Deactivated successfully.
Jan 26 10:01:49 compute-0 conmon[255509]: conmon c4b730ab1b00f16fc378 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4b730ab1b00f16fc3783b8a870dadecf6d7b94ed247012b7bfa5d7d10908298.scope/container/memory.events
Jan 26 10:01:49 compute-0 podman[255492]: 2026-01-26 10:01:49.717177914 +0000 UTC m=+0.143243594 container died c4b730ab1b00f16fc3783b8a870dadecf6d7b94ed247012b7bfa5d7d10908298 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:01:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5977dcb07c3cd905fc5fb3f198fc9c17d224079b8e9961adfefd941829ddc716-merged.mount: Deactivated successfully.
Jan 26 10:01:49 compute-0 podman[255492]: 2026-01-26 10:01:49.756245313 +0000 UTC m=+0.182310993 container remove c4b730ab1b00f16fc3783b8a870dadecf6d7b94ed247012b7bfa5d7d10908298 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 10:01:49 compute-0 systemd[1]: libpod-conmon-c4b730ab1b00f16fc3783b8a870dadecf6d7b94ed247012b7bfa5d7d10908298.scope: Deactivated successfully.
Jan 26 10:01:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:49.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:49 compute-0 podman[255532]: 2026-01-26 10:01:49.959127036 +0000 UTC m=+0.050262919 container create 8367dced9d121036f043b127ebd6d7ff74d102368d861c729aa3a3be3a130c42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 10:01:49 compute-0 systemd[1]: Started libpod-conmon-8367dced9d121036f043b127ebd6d7ff74d102368d861c729aa3a3be3a130c42.scope.
Jan 26 10:01:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8b5f289d8d6115cd6122daac1c5d9545cbfc13e7ce7dce15039007fe27b3d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8b5f289d8d6115cd6122daac1c5d9545cbfc13e7ce7dce15039007fe27b3d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8b5f289d8d6115cd6122daac1c5d9545cbfc13e7ce7dce15039007fe27b3d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8b5f289d8d6115cd6122daac1c5d9545cbfc13e7ce7dce15039007fe27b3d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8b5f289d8d6115cd6122daac1c5d9545cbfc13e7ce7dce15039007fe27b3d5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:50 compute-0 podman[255532]: 2026-01-26 10:01:50.03105276 +0000 UTC m=+0.122188673 container init 8367dced9d121036f043b127ebd6d7ff74d102368d861c729aa3a3be3a130c42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:01:50 compute-0 podman[255532]: 2026-01-26 10:01:49.94086054 +0000 UTC m=+0.031996423 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:01:50 compute-0 podman[255532]: 2026-01-26 10:01:50.037317945 +0000 UTC m=+0.128453808 container start 8367dced9d121036f043b127ebd6d7ff74d102368d861c729aa3a3be3a130c42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_aryabhata, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:01:50 compute-0 podman[255532]: 2026-01-26 10:01:50.041289324 +0000 UTC m=+0.132425227 container attach 8367dced9d121036f043b127ebd6d7ff74d102368d861c729aa3a3be3a130c42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 10:01:50 compute-0 ceph-mon[74456]: pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 26 10:01:50 compute-0 confident_aryabhata[255549]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:01:50 compute-0 confident_aryabhata[255549]: --> All data devices are unavailable
Jan 26 10:01:50 compute-0 systemd[1]: libpod-8367dced9d121036f043b127ebd6d7ff74d102368d861c729aa3a3be3a130c42.scope: Deactivated successfully.
Jan 26 10:01:50 compute-0 podman[255532]: 2026-01-26 10:01:50.386769607 +0000 UTC m=+0.477905560 container died 8367dced9d121036f043b127ebd6d7ff74d102368d861c729aa3a3be3a130c42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 26 10:01:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8b5f289d8d6115cd6122daac1c5d9545cbfc13e7ce7dce15039007fe27b3d5-merged.mount: Deactivated successfully.
Jan 26 10:01:50 compute-0 podman[255532]: 2026-01-26 10:01:50.441710812 +0000 UTC m=+0.532846685 container remove 8367dced9d121036f043b127ebd6d7ff74d102368d861c729aa3a3be3a130c42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:01:50 compute-0 systemd[1]: libpod-conmon-8367dced9d121036f043b127ebd6d7ff74d102368d861c729aa3a3be3a130c42.scope: Deactivated successfully.
Jan 26 10:01:50 compute-0 sudo[255428]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:50 compute-0 sudo[255578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:01:50 compute-0 sudo[255578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:01:50 compute-0 sudo[255578]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:50 compute-0 sudo[255603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:01:50 compute-0 sudo[255603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:01:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:50 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 10:01:51 compute-0 podman[255668]: 2026-01-26 10:01:51.122793537 +0000 UTC m=+0.102507174 container create 14caed9592ee0753f91552f4ca88af90cf3a34ff30086629e4285c038b8b72de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_kapitsa, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:01:51 compute-0 podman[255668]: 2026-01-26 10:01:51.041562105 +0000 UTC m=+0.021275812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:01:51 compute-0 systemd[1]: Started libpod-conmon-14caed9592ee0753f91552f4ca88af90cf3a34ff30086629e4285c038b8b72de.scope.
Jan 26 10:01:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:01:51 compute-0 podman[255668]: 2026-01-26 10:01:51.212989076 +0000 UTC m=+0.192702743 container init 14caed9592ee0753f91552f4ca88af90cf3a34ff30086629e4285c038b8b72de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_kapitsa, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 10:01:51 compute-0 podman[255668]: 2026-01-26 10:01:51.219531623 +0000 UTC m=+0.199245270 container start 14caed9592ee0753f91552f4ca88af90cf3a34ff30086629e4285c038b8b72de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:01:51 compute-0 podman[255668]: 2026-01-26 10:01:51.222994552 +0000 UTC m=+0.202708229 container attach 14caed9592ee0753f91552f4ca88af90cf3a34ff30086629e4285c038b8b72de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:01:51 compute-0 hopeful_kapitsa[255684]: 167 167
Jan 26 10:01:51 compute-0 systemd[1]: libpod-14caed9592ee0753f91552f4ca88af90cf3a34ff30086629e4285c038b8b72de.scope: Deactivated successfully.
Jan 26 10:01:51 compute-0 podman[255668]: 2026-01-26 10:01:51.229616071 +0000 UTC m=+0.209329718 container died 14caed9592ee0753f91552f4ca88af90cf3a34ff30086629e4285c038b8b72de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 26 10:01:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f461ceca389dd008b391fbce55b0e349443681b5500fb57c2f523c0e565ff361-merged.mount: Deactivated successfully.
Jan 26 10:01:51 compute-0 podman[255668]: 2026-01-26 10:01:51.303054681 +0000 UTC m=+0.282768318 container remove 14caed9592ee0753f91552f4ca88af90cf3a34ff30086629e4285c038b8b72de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_kapitsa, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 26 10:01:51 compute-0 systemd[1]: libpod-conmon-14caed9592ee0753f91552f4ca88af90cf3a34ff30086629e4285c038b8b72de.scope: Deactivated successfully.
Jan 26 10:01:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:51 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:51 compute-0 podman[255707]: 2026-01-26 10:01:51.471921317 +0000 UTC m=+0.037906015 container create 26be7cfabc373b31d95a339126cd067f638a24129fc448434e73a2344e6ee07e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ride, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 26 10:01:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:51 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:51 compute-0 systemd[1]: Started libpod-conmon-26be7cfabc373b31d95a339126cd067f638a24129fc448434e73a2344e6ee07e.scope.
Jan 26 10:01:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:51.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc6126328d0e354b393aafb2aec6f3ef26dc25fe83e783f7769d0c4ac8ac36f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:51 compute-0 podman[255707]: 2026-01-26 10:01:51.456400304 +0000 UTC m=+0.022385022 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc6126328d0e354b393aafb2aec6f3ef26dc25fe83e783f7769d0c4ac8ac36f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc6126328d0e354b393aafb2aec6f3ef26dc25fe83e783f7769d0c4ac8ac36f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc6126328d0e354b393aafb2aec6f3ef26dc25fe83e783f7769d0c4ac8ac36f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:51 compute-0 podman[255707]: 2026-01-26 10:01:51.57124506 +0000 UTC m=+0.137229778 container init 26be7cfabc373b31d95a339126cd067f638a24129fc448434e73a2344e6ee07e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ride, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:01:51 compute-0 podman[255707]: 2026-01-26 10:01:51.576969525 +0000 UTC m=+0.142954223 container start 26be7cfabc373b31d95a339126cd067f638a24129fc448434e73a2344e6ee07e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ride, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 10:01:51 compute-0 podman[255707]: 2026-01-26 10:01:51.580900743 +0000 UTC m=+0.146885461 container attach 26be7cfabc373b31d95a339126cd067f638a24129fc448434e73a2344e6ee07e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ride, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 26 10:01:51 compute-0 competent_ride[255723]: {
Jan 26 10:01:51 compute-0 competent_ride[255723]:     "0": [
Jan 26 10:01:51 compute-0 competent_ride[255723]:         {
Jan 26 10:01:51 compute-0 competent_ride[255723]:             "devices": [
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "/dev/loop3"
Jan 26 10:01:51 compute-0 competent_ride[255723]:             ],
Jan 26 10:01:51 compute-0 competent_ride[255723]:             "lv_name": "ceph_lv0",
Jan 26 10:01:51 compute-0 competent_ride[255723]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:01:51 compute-0 competent_ride[255723]:             "lv_size": "21470642176",
Jan 26 10:01:51 compute-0 competent_ride[255723]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:01:51 compute-0 competent_ride[255723]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:01:51 compute-0 competent_ride[255723]:             "name": "ceph_lv0",
Jan 26 10:01:51 compute-0 competent_ride[255723]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:01:51 compute-0 competent_ride[255723]:             "tags": {
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "ceph.cluster_name": "ceph",
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "ceph.crush_device_class": "",
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "ceph.encrypted": "0",
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "ceph.osd_id": "0",
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "ceph.type": "block",
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "ceph.vdo": "0",
Jan 26 10:01:51 compute-0 competent_ride[255723]:                 "ceph.with_tpm": "0"
Jan 26 10:01:51 compute-0 competent_ride[255723]:             },
Jan 26 10:01:51 compute-0 competent_ride[255723]:             "type": "block",
Jan 26 10:01:51 compute-0 competent_ride[255723]:             "vg_name": "ceph_vg0"
Jan 26 10:01:51 compute-0 competent_ride[255723]:         }
Jan 26 10:01:51 compute-0 competent_ride[255723]:     ]
Jan 26 10:01:51 compute-0 competent_ride[255723]: }
Jan 26 10:01:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:51.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:51 compute-0 systemd[1]: libpod-26be7cfabc373b31d95a339126cd067f638a24129fc448434e73a2344e6ee07e.scope: Deactivated successfully.
Jan 26 10:01:51 compute-0 podman[255707]: 2026-01-26 10:01:51.861519508 +0000 UTC m=+0.427504236 container died 26be7cfabc373b31d95a339126cd067f638a24129fc448434e73a2344e6ee07e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ride, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:01:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddc6126328d0e354b393aafb2aec6f3ef26dc25fe83e783f7769d0c4ac8ac36f-merged.mount: Deactivated successfully.
Jan 26 10:01:51 compute-0 podman[255707]: 2026-01-26 10:01:51.905097295 +0000 UTC m=+0.471081993 container remove 26be7cfabc373b31d95a339126cd067f638a24129fc448434e73a2344e6ee07e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_ride, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Jan 26 10:01:51 compute-0 systemd[1]: libpod-conmon-26be7cfabc373b31d95a339126cd067f638a24129fc448434e73a2344e6ee07e.scope: Deactivated successfully.
Jan 26 10:01:51 compute-0 sudo[255603]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:52 compute-0 sudo[255743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:01:52 compute-0 sudo[255743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:01:52 compute-0 sudo[255743]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:52 compute-0 sudo[255768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:01:52 compute-0 sudo[255768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:01:52 compute-0 ceph-mon[74456]: pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 10:01:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:52 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:52 compute-0 podman[255835]: 2026-01-26 10:01:52.441056127 +0000 UTC m=+0.021759558 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:01:52 compute-0 podman[255835]: 2026-01-26 10:01:52.78360189 +0000 UTC m=+0.364305331 container create fa2f1e09e0a2eb01ed028a39492072cf722487fc84c5d15adb17bca2ec2f1d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hypatia, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:01:52 compute-0 systemd[1]: Started libpod-conmon-fa2f1e09e0a2eb01ed028a39492072cf722487fc84c5d15adb17bca2ec2f1d12.scope.
Jan 26 10:01:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:01:52 compute-0 podman[255835]: 2026-01-26 10:01:52.875277465 +0000 UTC m=+0.455980906 container init fa2f1e09e0a2eb01ed028a39492072cf722487fc84c5d15adb17bca2ec2f1d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hypatia, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:01:52 compute-0 podman[255835]: 2026-01-26 10:01:52.884590825 +0000 UTC m=+0.465294246 container start fa2f1e09e0a2eb01ed028a39492072cf722487fc84c5d15adb17bca2ec2f1d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:01:52 compute-0 podman[255835]: 2026-01-26 10:01:52.888350427 +0000 UTC m=+0.469054068 container attach fa2f1e09e0a2eb01ed028a39492072cf722487fc84c5d15adb17bca2ec2f1d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:01:52 compute-0 funny_hypatia[255852]: 167 167
Jan 26 10:01:52 compute-0 systemd[1]: libpod-fa2f1e09e0a2eb01ed028a39492072cf722487fc84c5d15adb17bca2ec2f1d12.scope: Deactivated successfully.
Jan 26 10:01:52 compute-0 conmon[255852]: conmon fa2f1e09e0a2eb01ed02 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa2f1e09e0a2eb01ed028a39492072cf722487fc84c5d15adb17bca2ec2f1d12.scope/container/memory.events
Jan 26 10:01:52 compute-0 podman[255835]: 2026-01-26 10:01:52.893592963 +0000 UTC m=+0.474296394 container died fa2f1e09e0a2eb01ed028a39492072cf722487fc84c5d15adb17bca2ec2f1d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 10:01:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe69a3afdeb6667ca0ce4f2ca9bc2fe7886c1318c87ab96b1d03eff395e909f0-merged.mount: Deactivated successfully.
Jan 26 10:01:52 compute-0 podman[255835]: 2026-01-26 10:01:52.930804132 +0000 UTC m=+0.511507553 container remove fa2f1e09e0a2eb01ed028a39492072cf722487fc84c5d15adb17bca2ec2f1d12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:01:52 compute-0 systemd[1]: libpod-conmon-fa2f1e09e0a2eb01ed028a39492072cf722487fc84c5d15adb17bca2ec2f1d12.scope: Deactivated successfully.
Jan 26 10:01:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:01:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 10:01:53 compute-0 podman[255877]: 2026-01-26 10:01:53.128033204 +0000 UTC m=+0.052197624 container create 6a2dbfb65f5bb7ae19b276ac2f0d84f13169ba6deab82b00e69bc1fe0ce3dd24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_jang, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 10:01:53 compute-0 systemd[1]: Started libpod-conmon-6a2dbfb65f5bb7ae19b276ac2f0d84f13169ba6deab82b00e69bc1fe0ce3dd24.scope.
Jan 26 10:01:53 compute-0 podman[255877]: 2026-01-26 10:01:53.100963666 +0000 UTC m=+0.025128096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:01:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed9b60e3d463a26bea9b1e14c5b3e934a6cae22df2186e7b9e021823531c40e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed9b60e3d463a26bea9b1e14c5b3e934a6cae22df2186e7b9e021823531c40e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed9b60e3d463a26bea9b1e14c5b3e934a6cae22df2186e7b9e021823531c40e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed9b60e3d463a26bea9b1e14c5b3e934a6cae22df2186e7b9e021823531c40e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:01:53 compute-0 podman[255877]: 2026-01-26 10:01:53.22215119 +0000 UTC m=+0.146315650 container init 6a2dbfb65f5bb7ae19b276ac2f0d84f13169ba6deab82b00e69bc1fe0ce3dd24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_jang, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:01:53 compute-0 podman[255877]: 2026-01-26 10:01:53.229253537 +0000 UTC m=+0.153417977 container start 6a2dbfb65f5bb7ae19b276ac2f0d84f13169ba6deab82b00e69bc1fe0ce3dd24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_jang, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 10:01:53 compute-0 podman[255877]: 2026-01-26 10:01:53.233431514 +0000 UTC m=+0.157595974 container attach 6a2dbfb65f5bb7ae19b276ac2f0d84f13169ba6deab82b00e69bc1fe0ce3dd24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 10:01:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1793256565' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:01:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1793256565' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:01:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/289274094' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:01:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/289274094' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:01:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3723518383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:01:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3723518383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:01:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:53 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100153 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 10:01:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:53 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30002390 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:53.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:53.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:53 compute-0 lvm[255967]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:01:53 compute-0 lvm[255967]: VG ceph_vg0 finished
Jan 26 10:01:53 compute-0 goofy_jang[255893]: {}
Jan 26 10:01:53 compute-0 systemd[1]: libpod-6a2dbfb65f5bb7ae19b276ac2f0d84f13169ba6deab82b00e69bc1fe0ce3dd24.scope: Deactivated successfully.
Jan 26 10:01:53 compute-0 systemd[1]: libpod-6a2dbfb65f5bb7ae19b276ac2f0d84f13169ba6deab82b00e69bc1fe0ce3dd24.scope: Consumed 1.124s CPU time.
Jan 26 10:01:53 compute-0 podman[255877]: 2026-01-26 10:01:53.951072843 +0000 UTC m=+0.875237243 container died 6a2dbfb65f5bb7ae19b276ac2f0d84f13169ba6deab82b00e69bc1fe0ce3dd24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_jang, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:01:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ed9b60e3d463a26bea9b1e14c5b3e934a6cae22df2186e7b9e021823531c40e-merged.mount: Deactivated successfully.
Jan 26 10:01:53 compute-0 podman[255877]: 2026-01-26 10:01:53.996121696 +0000 UTC m=+0.920286106 container remove 6a2dbfb65f5bb7ae19b276ac2f0d84f13169ba6deab82b00e69bc1fe0ce3dd24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_jang, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:01:54 compute-0 systemd[1]: libpod-conmon-6a2dbfb65f5bb7ae19b276ac2f0d84f13169ba6deab82b00e69bc1fe0ce3dd24.scope: Deactivated successfully.
Jan 26 10:01:54 compute-0 sudo[255768]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:01:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:01:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:01:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:01:54 compute-0 sudo[255983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:01:54 compute-0 sudo[255983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:01:54 compute-0 sudo[255983]: pam_unix(sudo:session): session closed for user root
Jan 26 10:01:54 compute-0 ceph-mon[74456]: pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 10:01:54 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:01:54 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:01:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:54 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:01:54.685 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:01:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:01:54.686 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:01:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:01:54.686 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:01:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 10:01:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:55 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:55 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:55.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000037s ======
Jan 26 10:01:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:55.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 26 10:01:56 compute-0 ceph-mon[74456]: pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 10:01:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:01:56] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 26 10:01:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:01:56] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Jan 26 10:01:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:56 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30003130 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:01:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:01:57.078Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:01:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:57 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:57 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:57.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:57.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:01:58 compute-0 ceph-mon[74456]: pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:01:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:58 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:01:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:59 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30003130 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:01:59 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:01:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:01:59.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:01:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:01:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:01:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:01:59.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:00 compute-0 ceph-mon[74456]: pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:02:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:00 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:00 compute-0 sudo[256016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:02:00 compute-0 sudo[256016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:02:00 compute-0 sudo[256016]: pam_unix(sudo:session): session closed for user root
Jan 26 10:02:00 compute-0 podman[256040]: 2026-01-26 10:02:00.977667991 +0000 UTC m=+0.056234524 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 26 10:02:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:02:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:01 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:01 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30003130 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:01.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:01.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:02 compute-0 ceph-mon[74456]: pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:02:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:02 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:02:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 10:02:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:03 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:03 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000036s ======
Jan 26 10:02:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:03.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000036s
Jan 26 10:02:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:02:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:02:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:03.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:04 compute-0 ceph-mon[74456]: pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 10:02:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:02:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:04 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 10:02:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:05 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:05 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:05.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:05.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:02:06] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 10:02:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:02:06] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 10:02:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:06 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:06 compute-0 ceph-mon[74456]: pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 10:02:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:02:07.079Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:02:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:07 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:07 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:07.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000037s ======
Jan 26 10:02:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:07.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 26 10:02:07 compute-0 ceph-mon[74456]: pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:02:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:08 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:09 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:09 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000037s ======
Jan 26 10:02:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:09.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 26 10:02:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:09.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:10 compute-0 ceph-mon[74456]: pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:10 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:02:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:11 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:11 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:11.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:11.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:12 compute-0 ceph-mon[74456]: pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:02:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:12 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:02:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:13 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:13 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:13.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:13.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:14 compute-0 ceph-mon[74456]: pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:14 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:02:15 compute-0 podman[256078]: 2026-01-26 10:02:15.177268299 +0000 UTC m=+0.108253088 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 10:02:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:15 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:15 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:15.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:15.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:16 compute-0 ceph-mon[74456]: pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:02:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:02:16] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 10:02:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:02:16] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 10:02:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:16 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00000b60 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:02:17.080Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:02:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:02:17.080Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:02:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:17 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18001090 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:17 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:17.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:17.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:02:18 compute-0 ceph-mon[74456]: pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:02:18
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.nfs', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.log']
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:02:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:18 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:02:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:02:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:02:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:19 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000016a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:19 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18001230 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:19.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:02:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000037s ======
Jan 26 10:02:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:19.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 26 10:02:20 compute-0 nova_compute[254880]: 2026-01-26 10:02:20.081 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:02:20 compute-0 nova_compute[254880]: 2026-01-26 10:02:20.102 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:02:20 compute-0 ceph-mon[74456]: pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:20 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:20 compute-0 sudo[256110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:02:20 compute-0 sudo[256110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:02:20 compute-0 sudo[256110]: pam_unix(sudo:session): session closed for user root
Jan 26 10:02:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 26 10:02:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:21 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:21 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000016a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:21.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:21.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:21 compute-0 ceph-mon[74456]: pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 26 10:02:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:22 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18001fa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:22 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2408861058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:02:22 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/555301251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:02:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:02:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 26 10:02:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:23 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:23 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:23.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000037s ======
Jan 26 10:02:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:23.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 26 10:02:24 compute-0 ceph-mon[74456]: pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 26 10:02:24 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3685014642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:02:24 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4273966476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:02:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:24 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000016a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:24 compute-0 nova_compute[254880]: 2026-01-26 10:02:24.960 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:02:24 compute-0 nova_compute[254880]: 2026-01-26 10:02:24.960 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:02:24 compute-0 nova_compute[254880]: 2026-01-26 10:02:24.961 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:02:24 compute-0 nova_compute[254880]: 2026-01-26 10:02:24.961 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:02:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.231 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.231 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.232 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.232 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.232 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.232 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.233 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.233 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.233 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.350 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.350 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.350 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.350 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.351 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:02:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:25 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18001fa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:25 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:25.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:02:25 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1980237738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.830 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:02:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:25.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.994 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.995 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4952MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.995 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:02:25 compute-0 nova_compute[254880]: 2026-01-26 10:02:25.995 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:02:26 compute-0 ceph-mon[74456]: pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 26 10:02:26 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1980237738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:02:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:02:26] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 10:02:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:02:26] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 10:02:26 compute-0 nova_compute[254880]: 2026-01-26 10:02:26.651 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:02:26 compute-0 nova_compute[254880]: 2026-01-26 10:02:26.652 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:02:26 compute-0 nova_compute[254880]: 2026-01-26 10:02:26.676 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:02:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:26 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 26 10:02:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:02:27.081Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:02:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:02:27.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:02:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:02:27.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:02:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:02:27 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3992081513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:02:27 compute-0 nova_compute[254880]: 2026-01-26 10:02:27.126 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:02:27 compute-0 nova_compute[254880]: 2026-01-26 10:02:27.133 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:02:27 compute-0 nova_compute[254880]: 2026-01-26 10:02:27.208 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:02:27 compute-0 nova_compute[254880]: 2026-01-26 10:02:27.210 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:02:27 compute-0 nova_compute[254880]: 2026-01-26 10:02:27.210 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:02:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:27 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:27 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18001fa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:27.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:27 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3992081513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:02:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:27.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:02:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:28 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00002b10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:28 compute-0 ceph-mon[74456]: pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 26 10:02:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 26 10:02:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:29.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:29.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:29 compute-0 ceph-mon[74456]: pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 26 10:02:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:30 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 26 10:02:31 compute-0 sshd-session[256189]: Invalid user oracle from 157.245.76.178 port 50164
Jan 26 10:02:31 compute-0 podman[256191]: 2026-01-26 10:02:31.16328117 +0000 UTC m=+0.096576577 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 26 10:02:31 compute-0 sshd-session[256189]: Connection closed by invalid user oracle 157.245.76.178 port 50164 [preauth]
Jan 26 10:02:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:31 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00002b10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:31 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:31.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:31.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:32 compute-0 ceph-mon[74456]: pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Jan 26 10:02:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:32 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:02:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 26 10:02:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:33 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:33 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00002b10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:33.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:02:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:02:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:33.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:34 compute-0 ceph-mon[74456]: pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 26 10:02:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:02:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:34 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 26 10:02:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:35 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:35 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:35.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:35.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:36 compute-0 ceph-mon[74456]: pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 26 10:02:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:02:36] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 10:02:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:02:36] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 10:02:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:36 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:02:37.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:02:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:02:37.082Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:02:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:02:37.082Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:02:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:37 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:37 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:02:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:37.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:02:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:37.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:02:38 compute-0 ceph-mon[74456]: pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:38 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:39 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:39 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:39.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:39.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:40 compute-0 ceph-mon[74456]: pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:40 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:02:41 compute-0 sudo[256220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:02:41 compute-0 sudo[256220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:02:41 compute-0 sudo[256220]: pam_unix(sudo:session): session closed for user root
Jan 26 10:02:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:41.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:42 compute-0 ceph-mon[74456]: pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:02:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:42 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:02:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:43 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:43 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:43.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:43.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:44 compute-0 ceph-mon[74456]: pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:44 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:02:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:45 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:45 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b240044c0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:45.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:45.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:46 compute-0 podman[256249]: 2026-01-26 10:02:46.239592647 +0000 UTC m=+0.158123265 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 26 10:02:46 compute-0 ceph-mon[74456]: pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:02:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:02:46] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 10:02:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:02:46] "GET /metrics HTTP/1.1" 200 48273 "" "Prometheus/2.51.0"
Jan 26 10:02:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:46 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:02:47.083Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:02:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:47 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:47 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b300044d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:02:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:47.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:02:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:47.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:02:48 compute-0 ceph-mon[74456]: pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:48 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:02:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:02:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:02:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:02:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:02:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:02:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:02:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:02:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.24583 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 26 10:02:49 compute-0 ceph-mgr[74755]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 26 10:02:49 compute-0 ceph-mgr[74755]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 26 10:02:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.24614 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 26 10:02:49 compute-0 ceph-mgr[74755]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 26 10:02:49 compute-0 ceph-mgr[74755]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 26 10:02:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.24583 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 26 10:02:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:49 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:49 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c0010b0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:49.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:02:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/865647127' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 26 10:02:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3978845484' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 26 10:02:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:49.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:50 compute-0 ceph-mon[74456]: pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:50 compute-0 ceph-mon[74456]: from='client.24583 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 26 10:02:50 compute-0 ceph-mon[74456]: from='client.24614 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 26 10:02:50 compute-0 ceph-mon[74456]: from='client.24583 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 26 10:02:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:50 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08000b60 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:02:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:51 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:51 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:51.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:51.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:52 compute-0 ceph-mon[74456]: pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:02:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:52 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c0010b0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:02:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:53 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b080016a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:53 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b080016a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:53.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:53.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:54 compute-0 sudo[256288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:02:54 compute-0 sudo[256288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:02:54 compute-0 sudo[256288]: pam_unix(sudo:session): session closed for user root
Jan 26 10:02:54 compute-0 ceph-mon[74456]: pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:54 compute-0 sudo[256313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:02:54 compute-0 sudo[256313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:02:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:02:54.687 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:02:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:02:54.687 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:02:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:02:54.687 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:02:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:54 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:02:55 compute-0 sudo[256313]: pam_unix(sudo:session): session closed for user root
Jan 26 10:02:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:02:55 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:02:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:02:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:02:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:02:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:02:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:02:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:02:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:02:55 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:02:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:02:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:02:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:02:55 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:02:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:55 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c0010b0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:55 compute-0 sudo[256370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:02:55 compute-0 sudo[256370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:02:55 compute-0 sudo[256370]: pam_unix(sudo:session): session closed for user root
Jan 26 10:02:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:55 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b200012e0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:55 compute-0 sudo[256395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:02:55 compute-0 sudo[256395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:02:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:02:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:55.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:02:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:55.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:56 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:02:56 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:02:56 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:02:56 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:02:56 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:02:56 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:02:56 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:02:56 compute-0 podman[256463]: 2026-01-26 10:02:56.048937149 +0000 UTC m=+0.061846447 container create 0fd5d7475ffe38ea7a1cbd4a90bbf086da39cb122e3d396516c1c17c05e133f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Jan 26 10:02:56 compute-0 systemd[1]: Started libpod-conmon-0fd5d7475ffe38ea7a1cbd4a90bbf086da39cb122e3d396516c1c17c05e133f9.scope.
Jan 26 10:02:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:02:56 compute-0 podman[256463]: 2026-01-26 10:02:56.020758253 +0000 UTC m=+0.033667561 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:02:56 compute-0 podman[256463]: 2026-01-26 10:02:56.131523321 +0000 UTC m=+0.144432629 container init 0fd5d7475ffe38ea7a1cbd4a90bbf086da39cb122e3d396516c1c17c05e133f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_matsumoto, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:02:56 compute-0 podman[256463]: 2026-01-26 10:02:56.138172138 +0000 UTC m=+0.151081406 container start 0fd5d7475ffe38ea7a1cbd4a90bbf086da39cb122e3d396516c1c17c05e133f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 26 10:02:56 compute-0 podman[256463]: 2026-01-26 10:02:56.141792921 +0000 UTC m=+0.154702239 container attach 0fd5d7475ffe38ea7a1cbd4a90bbf086da39cb122e3d396516c1c17c05e133f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 10:02:56 compute-0 peaceful_matsumoto[256480]: 167 167
Jan 26 10:02:56 compute-0 systemd[1]: libpod-0fd5d7475ffe38ea7a1cbd4a90bbf086da39cb122e3d396516c1c17c05e133f9.scope: Deactivated successfully.
Jan 26 10:02:56 compute-0 conmon[256480]: conmon 0fd5d7475ffe38ea7a1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0fd5d7475ffe38ea7a1cbd4a90bbf086da39cb122e3d396516c1c17c05e133f9.scope/container/memory.events
Jan 26 10:02:56 compute-0 podman[256463]: 2026-01-26 10:02:56.145444004 +0000 UTC m=+0.158353262 container died 0fd5d7475ffe38ea7a1cbd4a90bbf086da39cb122e3d396516c1c17c05e133f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:02:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f28d0072588193a7433b008e09236647a6f81b7379b27613218153fada2921e-merged.mount: Deactivated successfully.
Jan 26 10:02:56 compute-0 podman[256463]: 2026-01-26 10:02:56.188684594 +0000 UTC m=+0.201593852 container remove 0fd5d7475ffe38ea7a1cbd4a90bbf086da39cb122e3d396516c1c17c05e133f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 26 10:02:56 compute-0 systemd[1]: libpod-conmon-0fd5d7475ffe38ea7a1cbd4a90bbf086da39cb122e3d396516c1c17c05e133f9.scope: Deactivated successfully.
Jan 26 10:02:56 compute-0 podman[256504]: 2026-01-26 10:02:56.363736136 +0000 UTC m=+0.048240432 container create 5e347fcf0959916bca5209551d7fb875cf86566064d442fce2d435f8bc37e130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_pike, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:02:56 compute-0 systemd[1]: Started libpod-conmon-5e347fcf0959916bca5209551d7fb875cf86566064d442fce2d435f8bc37e130.scope.
Jan 26 10:02:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:02:56 compute-0 podman[256504]: 2026-01-26 10:02:56.341419286 +0000 UTC m=+0.025923622 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f749b957808327b8a5c388fcb4969cb9a4f3e5eb1743991318cbd14bdc1200c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f749b957808327b8a5c388fcb4969cb9a4f3e5eb1743991318cbd14bdc1200c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f749b957808327b8a5c388fcb4969cb9a4f3e5eb1743991318cbd14bdc1200c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f749b957808327b8a5c388fcb4969cb9a4f3e5eb1743991318cbd14bdc1200c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:02:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f749b957808327b8a5c388fcb4969cb9a4f3e5eb1743991318cbd14bdc1200c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:02:56 compute-0 podman[256504]: 2026-01-26 10:02:56.448964063 +0000 UTC m=+0.133468389 container init 5e347fcf0959916bca5209551d7fb875cf86566064d442fce2d435f8bc37e130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 10:02:56 compute-0 podman[256504]: 2026-01-26 10:02:56.457882904 +0000 UTC m=+0.142387200 container start 5e347fcf0959916bca5209551d7fb875cf86566064d442fce2d435f8bc37e130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:02:56 compute-0 podman[256504]: 2026-01-26 10:02:56.461659171 +0000 UTC m=+0.146163467 container attach 5e347fcf0959916bca5209551d7fb875cf86566064d442fce2d435f8bc37e130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_pike, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 10:02:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:02:56] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 10:02:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:02:56] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 10:02:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:56 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b080016a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:56 compute-0 quizzical_pike[256522]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:02:56 compute-0 quizzical_pike[256522]: --> All data devices are unavailable
Jan 26 10:02:56 compute-0 systemd[1]: libpod-5e347fcf0959916bca5209551d7fb875cf86566064d442fce2d435f8bc37e130.scope: Deactivated successfully.
Jan 26 10:02:56 compute-0 podman[256537]: 2026-01-26 10:02:56.867340444 +0000 UTC m=+0.048966573 container died 5e347fcf0959916bca5209551d7fb875cf86566064d442fce2d435f8bc37e130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:02:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f749b957808327b8a5c388fcb4969cb9a4f3e5eb1743991318cbd14bdc1200c-merged.mount: Deactivated successfully.
Jan 26 10:02:56 compute-0 podman[256537]: 2026-01-26 10:02:56.924233 +0000 UTC m=+0.105859049 container remove 5e347fcf0959916bca5209551d7fb875cf86566064d442fce2d435f8bc37e130 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_pike, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 10:02:56 compute-0 systemd[1]: libpod-conmon-5e347fcf0959916bca5209551d7fb875cf86566064d442fce2d435f8bc37e130.scope: Deactivated successfully.
Jan 26 10:02:56 compute-0 sudo[256395]: pam_unix(sudo:session): session closed for user root
Jan 26 10:02:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:57 compute-0 sudo[256553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:02:57 compute-0 sudo[256553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:02:57 compute-0 sudo[256553]: pam_unix(sudo:session): session closed for user root
Jan 26 10:02:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:02:57.084Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:02:57 compute-0 sudo[256578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:02:57 compute-0 sudo[256578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:02:57 compute-0 ceph-mon[74456]: pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:02:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:57 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:57 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c0010b0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:02:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:57.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:02:57 compute-0 podman[256645]: 2026-01-26 10:02:57.607485199 +0000 UTC m=+0.058676827 container create e2f2475812f1f35cca69e2439cfaf6c0b44c805e3c81b8910f76d9880508b38e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 26 10:02:57 compute-0 systemd[1]: Started libpod-conmon-e2f2475812f1f35cca69e2439cfaf6c0b44c805e3c81b8910f76d9880508b38e.scope.
Jan 26 10:02:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:02:57 compute-0 podman[256645]: 2026-01-26 10:02:57.583995577 +0000 UTC m=+0.035187255 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:02:57 compute-0 podman[256645]: 2026-01-26 10:02:57.684858064 +0000 UTC m=+0.136049712 container init e2f2475812f1f35cca69e2439cfaf6c0b44c805e3c81b8910f76d9880508b38e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 10:02:57 compute-0 podman[256645]: 2026-01-26 10:02:57.690527294 +0000 UTC m=+0.141718922 container start e2f2475812f1f35cca69e2439cfaf6c0b44c805e3c81b8910f76d9880508b38e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_faraday, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:02:57 compute-0 podman[256645]: 2026-01-26 10:02:57.693696653 +0000 UTC m=+0.144888291 container attach e2f2475812f1f35cca69e2439cfaf6c0b44c805e3c81b8910f76d9880508b38e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 10:02:57 compute-0 hopeful_faraday[256662]: 167 167
Jan 26 10:02:57 compute-0 systemd[1]: libpod-e2f2475812f1f35cca69e2439cfaf6c0b44c805e3c81b8910f76d9880508b38e.scope: Deactivated successfully.
Jan 26 10:02:57 compute-0 podman[256645]: 2026-01-26 10:02:57.696454471 +0000 UTC m=+0.147646099 container died e2f2475812f1f35cca69e2439cfaf6c0b44c805e3c81b8910f76d9880508b38e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_faraday, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 10:02:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1706db34e192910ef4e748f291b5db7974245214d9b91c2d103a6645bb12f82-merged.mount: Deactivated successfully.
Jan 26 10:02:57 compute-0 podman[256645]: 2026-01-26 10:02:57.734594338 +0000 UTC m=+0.185785966 container remove e2f2475812f1f35cca69e2439cfaf6c0b44c805e3c81b8910f76d9880508b38e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_faraday, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 10:02:57 compute-0 systemd[1]: libpod-conmon-e2f2475812f1f35cca69e2439cfaf6c0b44c805e3c81b8910f76d9880508b38e.scope: Deactivated successfully.
Jan 26 10:02:57 compute-0 podman[256685]: 2026-01-26 10:02:57.885291683 +0000 UTC m=+0.044180339 container create 6032a56a6079da8fa7ca807d78981eca82199438d43fa51b6047ffdf3bf84dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swirles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 26 10:02:57 compute-0 systemd[1]: Started libpod-conmon-6032a56a6079da8fa7ca807d78981eca82199438d43fa51b6047ffdf3bf84dd9.scope.
Jan 26 10:02:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:57.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e260248ad530157c202cf1d8c2850cf947d088cefd25caedaeeb6d80d2c4a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:02:57 compute-0 podman[256685]: 2026-01-26 10:02:57.868393246 +0000 UTC m=+0.027281912 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e260248ad530157c202cf1d8c2850cf947d088cefd25caedaeeb6d80d2c4a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e260248ad530157c202cf1d8c2850cf947d088cefd25caedaeeb6d80d2c4a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e260248ad530157c202cf1d8c2850cf947d088cefd25caedaeeb6d80d2c4a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:02:57 compute-0 podman[256685]: 2026-01-26 10:02:57.98510836 +0000 UTC m=+0.143997026 container init 6032a56a6079da8fa7ca807d78981eca82199438d43fa51b6047ffdf3bf84dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:02:58 compute-0 podman[256685]: 2026-01-26 10:02:58.000551046 +0000 UTC m=+0.159439702 container start 6032a56a6079da8fa7ca807d78981eca82199438d43fa51b6047ffdf3bf84dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 26 10:02:58 compute-0 podman[256685]: 2026-01-26 10:02:58.004912719 +0000 UTC m=+0.163801365 container attach 6032a56a6079da8fa7ca807d78981eca82199438d43fa51b6047ffdf3bf84dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swirles, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:02:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]: {
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:     "0": [
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:         {
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:             "devices": [
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "/dev/loop3"
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:             ],
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:             "lv_name": "ceph_lv0",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:             "lv_size": "21470642176",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:             "name": "ceph_lv0",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:             "tags": {
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "ceph.cluster_name": "ceph",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "ceph.crush_device_class": "",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "ceph.encrypted": "0",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "ceph.osd_id": "0",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "ceph.type": "block",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "ceph.vdo": "0",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:                 "ceph.with_tpm": "0"
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:             },
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:             "type": "block",
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:             "vg_name": "ceph_vg0"
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:         }
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]:     ]
Jan 26 10:02:58 compute-0 pedantic_swirles[256702]: }
Jan 26 10:02:58 compute-0 systemd[1]: libpod-6032a56a6079da8fa7ca807d78981eca82199438d43fa51b6047ffdf3bf84dd9.scope: Deactivated successfully.
Jan 26 10:02:58 compute-0 ceph-mon[74456]: pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:58 compute-0 podman[256685]: 2026-01-26 10:02:58.361350832 +0000 UTC m=+0.520239478 container died 6032a56a6079da8fa7ca807d78981eca82199438d43fa51b6047ffdf3bf84dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:02:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-63e260248ad530157c202cf1d8c2850cf947d088cefd25caedaeeb6d80d2c4a7-merged.mount: Deactivated successfully.
Jan 26 10:02:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 10:02:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/914721217' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:02:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 10:02:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/914721217' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:02:58 compute-0 podman[256685]: 2026-01-26 10:02:58.403519623 +0000 UTC m=+0.562408269 container remove 6032a56a6079da8fa7ca807d78981eca82199438d43fa51b6047ffdf3bf84dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:02:58 compute-0 systemd[1]: libpod-conmon-6032a56a6079da8fa7ca807d78981eca82199438d43fa51b6047ffdf3bf84dd9.scope: Deactivated successfully.
Jan 26 10:02:58 compute-0 sudo[256578]: pam_unix(sudo:session): session closed for user root
Jan 26 10:02:58 compute-0 sudo[256723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:02:58 compute-0 sudo[256723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:02:58 compute-0 sudo[256723]: pam_unix(sudo:session): session closed for user root
Jan 26 10:02:58 compute-0 sudo[256748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:02:58 compute-0 sudo[256748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:02:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:58 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002aa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:58 compute-0 podman[256815]: 2026-01-26 10:02:58.928272518 +0000 UTC m=+0.035727491 container create 296ff82364defd8ca0988a39c41ee4e7bb9d6c9c92bcdc090ce26bb2ef1a6dc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 10:02:58 compute-0 systemd[1]: Started libpod-conmon-296ff82364defd8ca0988a39c41ee4e7bb9d6c9c92bcdc090ce26bb2ef1a6dc1.scope.
Jan 26 10:02:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:02:58 compute-0 podman[256815]: 2026-01-26 10:02:58.991838832 +0000 UTC m=+0.099293825 container init 296ff82364defd8ca0988a39c41ee4e7bb9d6c9c92bcdc090ce26bb2ef1a6dc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 10:02:58 compute-0 podman[256815]: 2026-01-26 10:02:58.998929512 +0000 UTC m=+0.106384465 container start 296ff82364defd8ca0988a39c41ee4e7bb9d6c9c92bcdc090ce26bb2ef1a6dc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_tharp, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:02:59 compute-0 podman[256815]: 2026-01-26 10:02:59.001801563 +0000 UTC m=+0.109256556 container attach 296ff82364defd8ca0988a39c41ee4e7bb9d6c9c92bcdc090ce26bb2ef1a6dc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 10:02:59 compute-0 boring_tharp[256831]: 167 167
Jan 26 10:02:59 compute-0 systemd[1]: libpod-296ff82364defd8ca0988a39c41ee4e7bb9d6c9c92bcdc090ce26bb2ef1a6dc1.scope: Deactivated successfully.
Jan 26 10:02:59 compute-0 conmon[256831]: conmon 296ff82364defd8ca098 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-296ff82364defd8ca0988a39c41ee4e7bb9d6c9c92bcdc090ce26bb2ef1a6dc1.scope/container/memory.events
Jan 26 10:02:59 compute-0 podman[256815]: 2026-01-26 10:02:59.004718295 +0000 UTC m=+0.112173238 container died 296ff82364defd8ca0988a39c41ee4e7bb9d6c9c92bcdc090ce26bb2ef1a6dc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 26 10:02:59 compute-0 podman[256815]: 2026-01-26 10:02:58.913554912 +0000 UTC m=+0.021009875 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:02:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3884eab23eb11812e8ab36ee3dc1696c16e46b9e6ed0675ac9ad9e60be93cd60-merged.mount: Deactivated successfully.
Jan 26 10:02:59 compute-0 podman[256815]: 2026-01-26 10:02:59.036372949 +0000 UTC m=+0.143827892 container remove 296ff82364defd8ca0988a39c41ee4e7bb9d6c9c92bcdc090ce26bb2ef1a6dc1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_tharp, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 26 10:02:59 compute-0 systemd[1]: libpod-conmon-296ff82364defd8ca0988a39c41ee4e7bb9d6c9c92bcdc090ce26bb2ef1a6dc1.scope: Deactivated successfully.
Jan 26 10:02:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:02:59 compute-0 podman[256856]: 2026-01-26 10:02:59.210156885 +0000 UTC m=+0.050631200 container create 89ff10e7638f6acd8af9438e4b535976ff6194a8282edee7cd89575bbff60e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Jan 26 10:02:59 compute-0 systemd[1]: Started libpod-conmon-89ff10e7638f6acd8af9438e4b535976ff6194a8282edee7cd89575bbff60e85.scope.
Jan 26 10:02:59 compute-0 podman[256856]: 2026-01-26 10:02:59.18728955 +0000 UTC m=+0.027763925 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:02:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/062314b3fcffcc7e400a079ff15d4411015ecf67704aa64b80d5288bb6144512/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/062314b3fcffcc7e400a079ff15d4411015ecf67704aa64b80d5288bb6144512/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/062314b3fcffcc7e400a079ff15d4411015ecf67704aa64b80d5288bb6144512/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/062314b3fcffcc7e400a079ff15d4411015ecf67704aa64b80d5288bb6144512/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:02:59 compute-0 podman[256856]: 2026-01-26 10:02:59.311085165 +0000 UTC m=+0.151559510 container init 89ff10e7638f6acd8af9438e4b535976ff6194a8282edee7cd89575bbff60e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tu, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:02:59 compute-0 podman[256856]: 2026-01-26 10:02:59.320085409 +0000 UTC m=+0.160559724 container start 89ff10e7638f6acd8af9438e4b535976ff6194a8282edee7cd89575bbff60e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 10:02:59 compute-0 podman[256856]: 2026-01-26 10:02:59.324001729 +0000 UTC m=+0.164476064 container attach 89ff10e7638f6acd8af9438e4b535976ff6194a8282edee7cd89575bbff60e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tu, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 10:02:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/914721217' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:02:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/914721217' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:02:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:59 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b080016a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:02:59 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:02:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:02:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:02:59.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:02:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:02:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:02:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:02:59.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:02:59 compute-0 lvm[256948]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:02:59 compute-0 lvm[256948]: VG ceph_vg0 finished
Jan 26 10:02:59 compute-0 zealous_tu[256873]: {}
Jan 26 10:03:00 compute-0 systemd[1]: libpod-89ff10e7638f6acd8af9438e4b535976ff6194a8282edee7cd89575bbff60e85.scope: Deactivated successfully.
Jan 26 10:03:00 compute-0 systemd[1]: libpod-89ff10e7638f6acd8af9438e4b535976ff6194a8282edee7cd89575bbff60e85.scope: Consumed 1.107s CPU time.
Jan 26 10:03:00 compute-0 podman[256856]: 2026-01-26 10:03:00.016649644 +0000 UTC m=+0.857123989 container died 89ff10e7638f6acd8af9438e4b535976ff6194a8282edee7cd89575bbff60e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tu, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:03:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-062314b3fcffcc7e400a079ff15d4411015ecf67704aa64b80d5288bb6144512-merged.mount: Deactivated successfully.
Jan 26 10:03:00 compute-0 podman[256856]: 2026-01-26 10:03:00.079470118 +0000 UTC m=+0.919944483 container remove 89ff10e7638f6acd8af9438e4b535976ff6194a8282edee7cd89575bbff60e85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_tu, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 26 10:03:00 compute-0 systemd[1]: libpod-conmon-89ff10e7638f6acd8af9438e4b535976ff6194a8282edee7cd89575bbff60e85.scope: Deactivated successfully.
Jan 26 10:03:00 compute-0 sudo[256748]: pam_unix(sudo:session): session closed for user root
Jan 26 10:03:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:03:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:03:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:03:00 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:03:00 compute-0 sudo[256965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:03:00 compute-0 sudo[256965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:03:00 compute-0 sudo[256965]: pam_unix(sudo:session): session closed for user root
Jan 26 10:03:00 compute-0 ceph-mon[74456]: pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:00 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:03:00 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:03:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:00 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c0010b0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 10:03:01 compute-0 sudo[256992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:03:01 compute-0 sudo[256992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:03:01 compute-0 sudo[256992]: pam_unix(sudo:session): session closed for user root
Jan 26 10:03:01 compute-0 podman[257016]: 2026-01-26 10:03:01.366335418 +0000 UTC m=+0.102841214 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 26 10:03:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:01 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20002c20 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:01 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b080016a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:03:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:01.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:03:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:01.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:02 compute-0 ceph-mon[74456]: pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 10:03:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:02 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:03:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 10:03:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:03 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001250 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:03 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001250 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:03.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:03:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:03:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:03.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:04 compute-0 ceph-mon[74456]: pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 10:03:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:03:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:04 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b080032f0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 10:03:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:05 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:05 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001250 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:05.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:03:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:05.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:03:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:03:06] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 10:03:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:03:06] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 10:03:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:06 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20003540 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:06 compute-0 ceph-mon[74456]: pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 10:03:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 10:03:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:03:07.085Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:03:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:07 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b080032f0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:07 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18003840 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:07.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:07.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:03:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:08 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001250 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:08 compute-0 ceph-mon[74456]: pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 10:03:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 10:03:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:09 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20003540 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:09 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:03:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:09.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:03:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:09.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:10 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:10 compute-0 ceph-mon[74456]: pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 10:03:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 10:03:11 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.24598 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 26 10:03:11 compute-0 ceph-mgr[74755]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 26 10:03:11 compute-0 ceph-mgr[74755]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 26 10:03:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 26 10:03:11 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3825936780' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 26 10:03:11 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.24598 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 26 10:03:11 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.15099 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 26 10:03:11 compute-0 ceph-mgr[74755]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 26 10:03:11 compute-0 ceph-mgr[74755]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 26 10:03:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:11 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001250 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:11 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001250 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:11.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:11 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/611339985' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 26 10:03:11 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3825936780' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 26 10:03:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:11.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:12 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:12 compute-0 ceph-mon[74456]: pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 10:03:12 compute-0 ceph-mon[74456]: from='client.24598 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 26 10:03:12 compute-0 ceph-mon[74456]: from='client.24598 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 26 10:03:12 compute-0 ceph-mon[74456]: from='client.15099 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 26 10:03:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:03:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:13 compute-0 sshd-session[257051]: Invalid user oracle from 157.245.76.178 port 47652
Jan 26 10:03:13 compute-0 sshd-session[257051]: Connection closed by invalid user oracle 157.245.76.178 port 47652 [preauth]
Jan 26 10:03:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:13 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:13 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001250 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:13.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:13.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:14 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20003540 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:14 compute-0 ceph-mon[74456]: pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:03:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:15 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:15 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:03:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:15.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:03:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:15.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:03:16] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 10:03:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:03:16] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 10:03:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:16 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001250 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:16 compute-0 ceph-mon[74456]: pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:03:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:03:17.086Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:03:17 compute-0 podman[257057]: 2026-01-26 10:03:17.178973417 +0000 UTC m=+0.112308503 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 26 10:03:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:17 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20003540 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:17 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:17.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:17.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:03:18
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['volumes', '.rgw.root', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.control', 'backups']
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:03:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:18 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:03:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:03:18 compute-0 ceph-mon[74456]: pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:03:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:03:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:19 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001250 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:19 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b20003540 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:19.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:19 compute-0 ceph-mon[74456]: pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:19.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:20 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:03:21 compute-0 sudo[257089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:03:21 compute-0 sudo[257089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:03:21 compute-0 sudo[257089]: pam_unix(sudo:session): session closed for user root
Jan 26 10:03:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:21 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:21 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00001090 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:21.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:03:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:21.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:03:22 compute-0 ceph-mon[74456]: pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:03:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:22 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30001fa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:03:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:23 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:23 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:23.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:03:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:23.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:03:24 compute-0 ceph-mon[74456]: pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:24 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3628511201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:03:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:24 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00001090 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:03:25 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1818591514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:03:25 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/642867322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:03:25 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1487621882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:03:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:25 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30001fa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:25 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:25.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:03:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:25.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:03:26 compute-0 ceph-mon[74456]: pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:03:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:03:26] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 10:03:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:03:26] "GET /metrics HTTP/1.1" 200 48272 "" "Prometheus/2.51.0"
Jan 26 10:03:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:26 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:03:27.088Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.203 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.204 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.227 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.227 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.227 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.247 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.247 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.247 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.248 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.248 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.248 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.248 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.248 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.249 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.270 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.271 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.271 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.271 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.271 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:03:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:27 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00001090 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:27 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30001fa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:27.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:03:27 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/497130841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.730 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.889 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.890 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4932MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.890 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.890 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.982 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:03:27 compute-0 nova_compute[254880]: 2026-01-26 10:03:27.982 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:03:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:27.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:28 compute-0 nova_compute[254880]: 2026-01-26 10:03:28.001 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:03:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:03:28 compute-0 ceph-mon[74456]: pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:28 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/497130841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:03:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:03:28 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/799836587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:03:28 compute-0 nova_compute[254880]: 2026-01-26 10:03:28.464 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:03:28 compute-0 nova_compute[254880]: 2026-01-26 10:03:28.470 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:03:28 compute-0 nova_compute[254880]: 2026-01-26 10:03:28.487 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:03:28 compute-0 nova_compute[254880]: 2026-01-26 10:03:28.489 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:03:28 compute-0 nova_compute[254880]: 2026-01-26 10:03:28.490 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:03:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:28 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:29 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/799836587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:03:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:29 compute-0 rsyslogd[1007]: imjournal: 6779 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 26 10:03:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:29 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b00001090 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:29.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:29.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:30 compute-0 ceph-mon[74456]: pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:30 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30001fa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:03:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:31 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:31 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:31.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:31.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:32 compute-0 podman[257168]: 2026-01-26 10:03:32.118234465 +0000 UTC m=+0.054907680 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:03:32 compute-0 ceph-mon[74456]: pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:03:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:32 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000030a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:03:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:33 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30001fa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:33 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:33.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:03:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:03:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:33.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:34 compute-0 ceph-mon[74456]: pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:03:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:34 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:03:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:35 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000030a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:35 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30001fa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 10:03:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:35.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 10:03:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:35.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:03:36] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:03:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:03:36] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:03:36 compute-0 ceph-mon[74456]: pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:03:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:36 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:03:37.089Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:03:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:37 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:37 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000030a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:37.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000008s ======
Jan 26 10:03:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:37.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Jan 26 10:03:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:03:38 compute-0 ceph-mon[74456]: pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:38 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30001fa0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:39 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:39 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:39.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 10:03:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:40.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 10:03:40 compute-0 ceph-mon[74456]: pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:40 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000030a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:03:41 compute-0 sudo[257197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:03:41 compute-0 sudo[257197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:03:41 compute-0 sudo[257197]: pam_unix(sudo:session): session closed for user root
Jan 26 10:03:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30004950 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:41 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:41.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:42.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100342 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:03:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:42 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:42 compute-0 ceph-mon[74456]: pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:03:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:03:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:03:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:43 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000030a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:43 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30004950 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:43.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000008s ======
Jan 26 10:03:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:44.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Jan 26 10:03:44 compute-0 ceph-mon[74456]: pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:03:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:44 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:45 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:45 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000041a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:45.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:46.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:46 compute-0 ceph-mon[74456]: pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:03:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:03:46] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:03:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:03:46] "GET /metrics HTTP/1.1" 200 48270 "" "Prometheus/2.51.0"
Jan 26 10:03:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:46 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30004950 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:03:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:03:47.090Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.123748) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421827123807, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2123, "num_deletes": 251, "total_data_size": 4182831, "memory_usage": 4264352, "flush_reason": "Manual Compaction"}
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421827160585, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4098766, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19980, "largest_seqno": 22102, "table_properties": {"data_size": 4089234, "index_size": 6026, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19523, "raw_average_key_size": 20, "raw_value_size": 4070294, "raw_average_value_size": 4213, "num_data_blocks": 264, "num_entries": 966, "num_filter_entries": 966, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769421607, "oldest_key_time": 1769421607, "file_creation_time": 1769421827, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 36920 microseconds, and 15583 cpu microseconds.
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.160666) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4098766 bytes OK
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.160693) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.163056) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.163077) EVENT_LOG_v1 {"time_micros": 1769421827163070, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.163101) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4174271, prev total WAL file size 4174271, number of live WAL files 2.
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.164943) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(4002KB)], [44(12MB)]
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421827164994, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 17167046, "oldest_snapshot_seqno": -1}
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5414 keys, 14965952 bytes, temperature: kUnknown
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421827270062, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 14965952, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14927211, "index_size": 24103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 136482, "raw_average_key_size": 25, "raw_value_size": 14826825, "raw_average_value_size": 2738, "num_data_blocks": 997, "num_entries": 5414, "num_filter_entries": 5414, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769421827, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.270400) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14965952 bytes
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.271982) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.3 rd, 142.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.5 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(7.8) write-amplify(3.7) OK, records in: 5934, records dropped: 520 output_compression: NoCompression
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.272003) EVENT_LOG_v1 {"time_micros": 1769421827271993, "job": 22, "event": "compaction_finished", "compaction_time_micros": 105147, "compaction_time_cpu_micros": 50889, "output_level": 6, "num_output_files": 1, "total_output_size": 14965952, "num_input_records": 5934, "num_output_records": 5414, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421827273039, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421827275844, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.164839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.275968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.275977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.275980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.275984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:03:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:03:47.275987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:03:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:47 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:47 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:47.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:48.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:03:48 compute-0 ceph-mon[74456]: pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:03:48 compute-0 podman[257228]: 2026-01-26 10:03:48.190618489 +0000 UTC m=+0.122829486 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:03:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:03:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:03:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:48 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:03:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:03:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:03:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:03:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:03:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:03:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:03:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:03:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:49 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b30004950 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:49 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 10:03:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:49.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 10:03:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:50.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:50 compute-0 ceph-mon[74456]: pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:03:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:50 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b08003c10 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:50 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:03:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:03:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:51 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000041a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:51 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001bb0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 10:03:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:51.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 10:03:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 10:03:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:52.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 10:03:52 compute-0 ceph-mon[74456]: pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:03:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:52 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b200024d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:03:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:03:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:53 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:53 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000041a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:53.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:53 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:03:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:53 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:03:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:54.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:54 compute-0 ceph-mon[74456]: pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:03:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:03:54.688 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:03:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:03:54.688 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:03:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:03:54.688 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:03:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:54 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001bb0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:03:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:55 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b200024d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100355 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:03:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:55 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 10:03:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:55.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 10:03:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000008s ======
Jan 26 10:03:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:56.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Jan 26 10:03:56 compute-0 sshd-session[257264]: Invalid user oracle from 157.245.76.178 port 35602
Jan 26 10:03:56 compute-0 ceph-mon[74456]: pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:03:56 compute-0 sshd-session[257264]: Connection closed by invalid user oracle 157.245.76.178 port 35602 [preauth]
Jan 26 10:03:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:03:56] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 10:03:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:03:56] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 10:03:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:56 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000041a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:03:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:03:57.091Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:03:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:57 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001bb0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:57 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001bb0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:03:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:57.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:03:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 10:03:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:03:58.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 10:03:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:03:58 compute-0 ceph-mon[74456]: pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:03:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1736252094' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:03:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1736252094' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:03:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:58 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:03:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:59 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000041a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:03:59 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001bb0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:03:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:03:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000008s ======
Jan 26 10:03:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:03:59.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Jan 26 10:04:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000007s ======
Jan 26 10:04:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:00.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Jan 26 10:04:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:04:00 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:04:00 compute-0 ceph-mon[74456]: pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 26 10:04:00 compute-0 sudo[257272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:04:00 compute-0 sudo[257272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:04:00 compute-0 sudo[257272]: pam_unix(sudo:session): session closed for user root
Jan 26 10:04:00 compute-0 sudo[257297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:04:00 compute-0 sudo[257297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:04:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:04:00 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001bb0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 10:04:01 compute-0 sudo[257297]: pam_unix(sudo:session): session closed for user root
Jan 26 10:04:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:04:01 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:04:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:04:01 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:04:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:04:01 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:04:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:04:01 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:04:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:04:01 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:04:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:04:01 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:04:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:04:01 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:04:01 compute-0 sudo[257354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:04:01 compute-0 sudo[257354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:04:01 compute-0 sudo[257354]: pam_unix(sudo:session): session closed for user root
Jan 26 10:04:01 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:04:01 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:04:01 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:04:01 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:04:01 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:04:01 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:04:01 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:04:01 compute-0 sudo[257379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:04:01 compute-0 sudo[257379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:04:01 compute-0 sudo[257404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:04:01 compute-0 sudo[257404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:04:01 compute-0 sudo[257404]: pam_unix(sudo:session): session closed for user root
Jan 26 10:04:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:04:01 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:04:01 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b000041a0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 26 10:04:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:01.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 26 10:04:01 compute-0 podman[257471]: 2026-01-26 10:04:01.857717367 +0000 UTC m=+0.047979972 container create 90c20cfbd82c5a31bc8f87c3b25823a6b9c819babb7ab680cbb9d3d038a30409 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 10:04:01 compute-0 systemd[1]: Started libpod-conmon-90c20cfbd82c5a31bc8f87c3b25823a6b9c819babb7ab680cbb9d3d038a30409.scope.
Jan 26 10:04:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:04:01 compute-0 podman[257471]: 2026-01-26 10:04:01.837931382 +0000 UTC m=+0.028194017 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:04:01 compute-0 podman[257471]: 2026-01-26 10:04:01.95689253 +0000 UTC m=+0.147155185 container init 90c20cfbd82c5a31bc8f87c3b25823a6b9c819babb7ab680cbb9d3d038a30409 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:04:01 compute-0 podman[257471]: 2026-01-26 10:04:01.965365183 +0000 UTC m=+0.155627768 container start 90c20cfbd82c5a31bc8f87c3b25823a6b9c819babb7ab680cbb9d3d038a30409 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_elion, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Jan 26 10:04:01 compute-0 podman[257471]: 2026-01-26 10:04:01.968754698 +0000 UTC m=+0.159017343 container attach 90c20cfbd82c5a31bc8f87c3b25823a6b9c819babb7ab680cbb9d3d038a30409 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_elion, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:04:01 compute-0 systemd[1]: libpod-90c20cfbd82c5a31bc8f87c3b25823a6b9c819babb7ab680cbb9d3d038a30409.scope: Deactivated successfully.
Jan 26 10:04:01 compute-0 gifted_elion[257488]: 167 167
Jan 26 10:04:01 compute-0 conmon[257488]: conmon 90c20cfbd82c5a31bc8f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-90c20cfbd82c5a31bc8f87c3b25823a6b9c819babb7ab680cbb9d3d038a30409.scope/container/memory.events
Jan 26 10:04:01 compute-0 podman[257471]: 2026-01-26 10:04:01.974939341 +0000 UTC m=+0.165201936 container died 90c20cfbd82c5a31bc8f87c3b25823a6b9c819babb7ab680cbb9d3d038a30409 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Jan 26 10:04:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-825790679d283628a738fafb1ed4b98e782fbdfecb6b3e92628c6fc4b15c957a-merged.mount: Deactivated successfully.
Jan 26 10:04:02 compute-0 podman[257471]: 2026-01-26 10:04:02.02123229 +0000 UTC m=+0.211494895 container remove 90c20cfbd82c5a31bc8f87c3b25823a6b9c819babb7ab680cbb9d3d038a30409 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:04:02 compute-0 systemd[1]: libpod-conmon-90c20cfbd82c5a31bc8f87c3b25823a6b9c819babb7ab680cbb9d3d038a30409.scope: Deactivated successfully.
Jan 26 10:04:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 26 10:04:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:02.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 26 10:04:02 compute-0 podman[257512]: 2026-01-26 10:04:02.218002576 +0000 UTC m=+0.043983317 container create 1dc745cc28f0cc894fe452c9a270d98be98f7e009513d7d5bee8719939212ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_gates, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:04:02 compute-0 systemd[1]: Started libpod-conmon-1dc745cc28f0cc894fe452c9a270d98be98f7e009513d7d5bee8719939212ed3.scope.
Jan 26 10:04:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cf094f7b16fdac5edadd7bff76bfcd0c04d3be27bef3912b7e7ab68debde5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cf094f7b16fdac5edadd7bff76bfcd0c04d3be27bef3912b7e7ab68debde5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cf094f7b16fdac5edadd7bff76bfcd0c04d3be27bef3912b7e7ab68debde5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cf094f7b16fdac5edadd7bff76bfcd0c04d3be27bef3912b7e7ab68debde5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cf094f7b16fdac5edadd7bff76bfcd0c04d3be27bef3912b7e7ab68debde5c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:02 compute-0 podman[257512]: 2026-01-26 10:04:02.195357723 +0000 UTC m=+0.021338464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:04:02 compute-0 podman[257512]: 2026-01-26 10:04:02.300603004 +0000 UTC m=+0.126583835 container init 1dc745cc28f0cc894fe452c9a270d98be98f7e009513d7d5bee8719939212ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_gates, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 10:04:02 compute-0 podman[257512]: 2026-01-26 10:04:02.30848683 +0000 UTC m=+0.134467541 container start 1dc745cc28f0cc894fe452c9a270d98be98f7e009513d7d5bee8719939212ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_gates, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 26 10:04:02 compute-0 podman[257512]: 2026-01-26 10:04:02.311859844 +0000 UTC m=+0.137840575 container attach 1dc745cc28f0cc894fe452c9a270d98be98f7e009513d7d5bee8719939212ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_gates, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Jan 26 10:04:02 compute-0 podman[257526]: 2026-01-26 10:04:02.325151668 +0000 UTC m=+0.064549478 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:04:02 compute-0 ceph-mon[74456]: pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 426 B/s wr, 2 op/s
Jan 26 10:04:02 compute-0 kind_gates[257529]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:04:02 compute-0 kind_gates[257529]: --> All data devices are unavailable
Jan 26 10:04:02 compute-0 systemd[1]: libpod-1dc745cc28f0cc894fe452c9a270d98be98f7e009513d7d5bee8719939212ed3.scope: Deactivated successfully.
Jan 26 10:04:02 compute-0 podman[257512]: 2026-01-26 10:04:02.654792344 +0000 UTC m=+0.480773075 container died 1dc745cc28f0cc894fe452c9a270d98be98f7e009513d7d5bee8719939212ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_gates, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:04:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-08cf094f7b16fdac5edadd7bff76bfcd0c04d3be27bef3912b7e7ab68debde5c-merged.mount: Deactivated successfully.
Jan 26 10:04:02 compute-0 podman[257512]: 2026-01-26 10:04:02.703053765 +0000 UTC m=+0.529034486 container remove 1dc745cc28f0cc894fe452c9a270d98be98f7e009513d7d5bee8719939212ed3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_gates, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 10:04:02 compute-0 systemd[1]: libpod-conmon-1dc745cc28f0cc894fe452c9a270d98be98f7e009513d7d5bee8719939212ed3.scope: Deactivated successfully.
Jan 26 10:04:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:04:02 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b1c001bb0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:02 compute-0 sudo[257379]: pam_unix(sudo:session): session closed for user root
Jan 26 10:04:02 compute-0 sudo[257577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:04:02 compute-0 sudo[257577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:04:02 compute-0 sudo[257577]: pam_unix(sudo:session): session closed for user root
Jan 26 10:04:02 compute-0 sudo[257602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:04:02 compute-0 sudo[257602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:04:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:04:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 26 10:04:03 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:04:03.259 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:04:03 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:04:03.261 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 10:04:03 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:04:03.262 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:04:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:04:03 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 10:04:03 compute-0 podman[257669]: 2026-01-26 10:04:03.387381238 +0000 UTC m=+0.044544176 container create 5439e16ee8cbaa62f3831c3c2d0f69722a4dc340011f6b4982caf12bac55881b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 10:04:03 compute-0 systemd[1]: Started libpod-conmon-5439e16ee8cbaa62f3831c3c2d0f69722a4dc340011f6b4982caf12bac55881b.scope.
Jan 26 10:04:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:04:03 compute-0 podman[257669]: 2026-01-26 10:04:03.367683816 +0000 UTC m=+0.024846784 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:04:03 compute-0 podman[257669]: 2026-01-26 10:04:03.472558005 +0000 UTC m=+0.129720943 container init 5439e16ee8cbaa62f3831c3c2d0f69722a4dc340011f6b4982caf12bac55881b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 26 10:04:03 compute-0 podman[257669]: 2026-01-26 10:04:03.484475786 +0000 UTC m=+0.141638704 container start 5439e16ee8cbaa62f3831c3c2d0f69722a4dc340011f6b4982caf12bac55881b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shaw, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:04:03 compute-0 podman[257669]: 2026-01-26 10:04:03.487667405 +0000 UTC m=+0.144830343 container attach 5439e16ee8cbaa62f3831c3c2d0f69722a4dc340011f6b4982caf12bac55881b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shaw, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:04:03 compute-0 laughing_shaw[257685]: 167 167
Jan 26 10:04:03 compute-0 systemd[1]: libpod-5439e16ee8cbaa62f3831c3c2d0f69722a4dc340011f6b4982caf12bac55881b.scope: Deactivated successfully.
Jan 26 10:04:03 compute-0 podman[257669]: 2026-01-26 10:04:03.492725893 +0000 UTC m=+0.149888851 container died 5439e16ee8cbaa62f3831c3c2d0f69722a4dc340011f6b4982caf12bac55881b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 26 10:04:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-83578ea82f311ff4e33e8a734fc3d34e50e7b99c3b27a3a2f758af6050075196-merged.mount: Deactivated successfully.
Jan 26 10:04:03 compute-0 podman[257669]: 2026-01-26 10:04:03.536476802 +0000 UTC m=+0.193639730 container remove 5439e16ee8cbaa62f3831c3c2d0f69722a4dc340011f6b4982caf12bac55881b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shaw, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 26 10:04:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:04:03 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b200024d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:03 compute-0 systemd[1]: libpod-conmon-5439e16ee8cbaa62f3831c3c2d0f69722a4dc340011f6b4982caf12bac55881b.scope: Deactivated successfully.
Jan 26 10:04:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:04:03 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b200024d0 fd 44 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 26 10:04:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:03.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 26 10:04:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:04:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:04:03 compute-0 podman[257709]: 2026-01-26 10:04:03.732426094 +0000 UTC m=+0.043696099 container create 0b58c9ab7e1dec4a93d486d5f1620a9272285485367a1b9af477dde424e45e8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hellman, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 10:04:03 compute-0 systemd[1]: Started libpod-conmon-0b58c9ab7e1dec4a93d486d5f1620a9272285485367a1b9af477dde424e45e8b.scope.
Jan 26 10:04:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c94d4575d7f41c4a51a8b2dfa35e45f1975a1f613056f563ee43f9c783ce0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:03 compute-0 podman[257709]: 2026-01-26 10:04:03.714016442 +0000 UTC m=+0.025286477 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c94d4575d7f41c4a51a8b2dfa35e45f1975a1f613056f563ee43f9c783ce0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c94d4575d7f41c4a51a8b2dfa35e45f1975a1f613056f563ee43f9c783ce0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c94d4575d7f41c4a51a8b2dfa35e45f1975a1f613056f563ee43f9c783ce0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:03 compute-0 podman[257709]: 2026-01-26 10:04:03.823453353 +0000 UTC m=+0.134723378 container init 0b58c9ab7e1dec4a93d486d5f1620a9272285485367a1b9af477dde424e45e8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hellman, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:04:03 compute-0 podman[257709]: 2026-01-26 10:04:03.833251818 +0000 UTC m=+0.144521823 container start 0b58c9ab7e1dec4a93d486d5f1620a9272285485367a1b9af477dde424e45e8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hellman, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 10:04:03 compute-0 podman[257709]: 2026-01-26 10:04:03.836360694 +0000 UTC m=+0.147630709 container attach 0b58c9ab7e1dec4a93d486d5f1620a9272285485367a1b9af477dde424e45e8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hellman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:04:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:04.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]: {
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:     "0": [
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:         {
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:             "devices": [
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "/dev/loop3"
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:             ],
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:             "lv_name": "ceph_lv0",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:             "lv_size": "21470642176",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:             "name": "ceph_lv0",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:             "tags": {
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "ceph.cluster_name": "ceph",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "ceph.crush_device_class": "",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "ceph.encrypted": "0",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "ceph.osd_id": "0",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "ceph.type": "block",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "ceph.vdo": "0",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:                 "ceph.with_tpm": "0"
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:             },
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:             "type": "block",
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:             "vg_name": "ceph_vg0"
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:         }
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]:     ]
Jan 26 10:04:04 compute-0 mystifying_hellman[257725]: }
Jan 26 10:04:04 compute-0 podman[257709]: 2026-01-26 10:04:04.218500893 +0000 UTC m=+0.529770908 container died 0b58c9ab7e1dec4a93d486d5f1620a9272285485367a1b9af477dde424e45e8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hellman, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 10:04:04 compute-0 systemd[1]: libpod-0b58c9ab7e1dec4a93d486d5f1620a9272285485367a1b9af477dde424e45e8b.scope: Deactivated successfully.
Jan 26 10:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9c94d4575d7f41c4a51a8b2dfa35e45f1975a1f613056f563ee43f9c783ce0c-merged.mount: Deactivated successfully.
Jan 26 10:04:04 compute-0 podman[257709]: 2026-01-26 10:04:04.270848841 +0000 UTC m=+0.582118896 container remove 0b58c9ab7e1dec4a93d486d5f1620a9272285485367a1b9af477dde424e45e8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:04:04 compute-0 systemd[1]: libpod-conmon-0b58c9ab7e1dec4a93d486d5f1620a9272285485367a1b9af477dde424e45e8b.scope: Deactivated successfully.
Jan 26 10:04:04 compute-0 sudo[257602]: pam_unix(sudo:session): session closed for user root
Jan 26 10:04:04 compute-0 sudo[257746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:04:04 compute-0 sudo[257746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:04:04 compute-0 sudo[257746]: pam_unix(sudo:session): session closed for user root
Jan 26 10:04:04 compute-0 sudo[257773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:04:04 compute-0 sudo[257773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:04:04 compute-0 ceph-mon[74456]: pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 26 10:04:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:04:04 compute-0 kernel: ganesha.nfsd[257087]: segfault at 50 ip 00007f3bbb41a32e sp 00007f3b40ff8210 error 4 in libntirpc.so.5.8[7f3bbb3ff000+2c000] likely on CPU 7 (core 0, socket 7)
Jan 26 10:04:04 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 10:04:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[247961]: 26/01/2026 10:04:04 : epoch 69773b3d : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3b18004550 fd 44 proxy ignored for local
Jan 26 10:04:04 compute-0 systemd[1]: Started Process Core Dump (PID 257825/UID 0).
Jan 26 10:04:04 compute-0 podman[257842]: 2026-01-26 10:04:04.878105237 +0000 UTC m=+0.047707113 container create c3c66e3419b286c41f6f9e5b56ec07fc2f042b3edb17226a53f6f1e124d52ad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:04:04 compute-0 systemd[1]: Started libpod-conmon-c3c66e3419b286c41f6f9e5b56ec07fc2f042b3edb17226a53f6f1e124d52ad1.scope.
Jan 26 10:04:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:04:04 compute-0 podman[257842]: 2026-01-26 10:04:04.85857738 +0000 UTC m=+0.028179266 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:04:04 compute-0 podman[257842]: 2026-01-26 10:04:04.955904206 +0000 UTC m=+0.125506122 container init c3c66e3419b286c41f6f9e5b56ec07fc2f042b3edb17226a53f6f1e124d52ad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 26 10:04:04 compute-0 podman[257842]: 2026-01-26 10:04:04.968346413 +0000 UTC m=+0.137948279 container start c3c66e3419b286c41f6f9e5b56ec07fc2f042b3edb17226a53f6f1e124d52ad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 10:04:04 compute-0 inspiring_mcclintock[257858]: 167 167
Jan 26 10:04:04 compute-0 systemd[1]: libpod-c3c66e3419b286c41f6f9e5b56ec07fc2f042b3edb17226a53f6f1e124d52ad1.scope: Deactivated successfully.
Jan 26 10:04:04 compute-0 podman[257842]: 2026-01-26 10:04:04.972590555 +0000 UTC m=+0.142192441 container attach c3c66e3419b286c41f6f9e5b56ec07fc2f042b3edb17226a53f6f1e124d52ad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcclintock, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 26 10:04:04 compute-0 podman[257842]: 2026-01-26 10:04:04.973114141 +0000 UTC m=+0.142716007 container died c3c66e3419b286c41f6f9e5b56ec07fc2f042b3edb17226a53f6f1e124d52ad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcclintock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 26 10:04:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-93e90a1a670739713f96b7cff36ce9b071da0cb0936eea4f41deb0aed114530b-merged.mount: Deactivated successfully.
Jan 26 10:04:05 compute-0 podman[257842]: 2026-01-26 10:04:05.014544709 +0000 UTC m=+0.184146575 container remove c3c66e3419b286c41f6f9e5b56ec07fc2f042b3edb17226a53f6f1e124d52ad1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcclintock, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 26 10:04:05 compute-0 systemd[1]: libpod-conmon-c3c66e3419b286c41f6f9e5b56ec07fc2f042b3edb17226a53f6f1e124d52ad1.scope: Deactivated successfully.
Jan 26 10:04:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 10:04:05 compute-0 podman[257883]: 2026-01-26 10:04:05.190744097 +0000 UTC m=+0.037971052 container create 6bf10fe353e50bf00a9c0f33947dce1ea5357c14aad15273f96141b72bc675ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:04:05 compute-0 systemd[1]: Started libpod-conmon-6bf10fe353e50bf00a9c0f33947dce1ea5357c14aad15273f96141b72bc675ac.scope.
Jan 26 10:04:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274bcf93e212e2c8cc392a8d395eb17ea35dee92c8a6ad3492107a2a1770b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274bcf93e212e2c8cc392a8d395eb17ea35dee92c8a6ad3492107a2a1770b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274bcf93e212e2c8cc392a8d395eb17ea35dee92c8a6ad3492107a2a1770b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:05 compute-0 podman[257883]: 2026-01-26 10:04:05.176098081 +0000 UTC m=+0.023325036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98274bcf93e212e2c8cc392a8d395eb17ea35dee92c8a6ad3492107a2a1770b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:05 compute-0 podman[257883]: 2026-01-26 10:04:05.297021741 +0000 UTC m=+0.144248696 container init 6bf10fe353e50bf00a9c0f33947dce1ea5357c14aad15273f96141b72bc675ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 26 10:04:05 compute-0 podman[257883]: 2026-01-26 10:04:05.30668853 +0000 UTC m=+0.153915475 container start 6bf10fe353e50bf00a9c0f33947dce1ea5357c14aad15273f96141b72bc675ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:04:05 compute-0 podman[257883]: 2026-01-26 10:04:05.310356555 +0000 UTC m=+0.157583480 container attach 6bf10fe353e50bf00a9c0f33947dce1ea5357c14aad15273f96141b72bc675ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:04:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:05.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:06 compute-0 lvm[257973]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:04:06 compute-0 lvm[257973]: VG ceph_vg0 finished
Jan 26 10:04:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:06.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:06 compute-0 beautiful_gates[257899]: {}
Jan 26 10:04:06 compute-0 systemd[1]: libpod-6bf10fe353e50bf00a9c0f33947dce1ea5357c14aad15273f96141b72bc675ac.scope: Deactivated successfully.
Jan 26 10:04:06 compute-0 podman[257883]: 2026-01-26 10:04:06.097307997 +0000 UTC m=+0.944534932 container died 6bf10fe353e50bf00a9c0f33947dce1ea5357c14aad15273f96141b72bc675ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 10:04:06 compute-0 systemd[1]: libpod-6bf10fe353e50bf00a9c0f33947dce1ea5357c14aad15273f96141b72bc675ac.scope: Consumed 1.242s CPU time.
Jan 26 10:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-98274bcf93e212e2c8cc392a8d395eb17ea35dee92c8a6ad3492107a2a1770b5-merged.mount: Deactivated successfully.
Jan 26 10:04:06 compute-0 podman[257883]: 2026-01-26 10:04:06.140412778 +0000 UTC m=+0.987639713 container remove 6bf10fe353e50bf00a9c0f33947dce1ea5357c14aad15273f96141b72bc675ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 10:04:06 compute-0 systemd[1]: libpod-conmon-6bf10fe353e50bf00a9c0f33947dce1ea5357c14aad15273f96141b72bc675ac.scope: Deactivated successfully.
Jan 26 10:04:06 compute-0 sudo[257773]: pam_unix(sudo:session): session closed for user root
Jan 26 10:04:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:04:06 compute-0 systemd-coredump[257828]: Process 247971 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 65:
                                                    #0  0x00007f3bbb41a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 10:04:06 compute-0 systemd[1]: systemd-coredump@12-257825-0.service: Deactivated successfully.
Jan 26 10:04:06 compute-0 systemd[1]: systemd-coredump@12-257825-0.service: Consumed 1.614s CPU time.
Jan 26 10:04:06 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 10:04:06 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 10:04:06 compute-0 podman[257997]: 2026-01-26 10:04:06.627356065 +0000 UTC m=+0.042730710 container died 5defd66b224a0d8937dd38707979a5e755fa2673724a935f93a74104c800b708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 26 10:04:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:04:06] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 10:04:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:04:06] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 10:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-51a302c9e56bdef9ff79dd2ccaecbcff7b0c76b4ce52722c388b34a51c46a63a-merged.mount: Deactivated successfully.
Jan 26 10:04:06 compute-0 podman[257997]: 2026-01-26 10:04:06.667540313 +0000 UTC m=+0.082914908 container remove 5defd66b224a0d8937dd38707979a5e755fa2673724a935f93a74104c800b708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:04:06 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 10:04:06 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 10:04:06 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.593s CPU time.
Jan 26 10:04:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 10:04:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:04:07.092Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:04:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:04:07.092Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:04:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:07.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:08.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:04:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100408 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 10:04:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 10:04:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 10:04:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:09.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 10:04:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:10.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100410 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:04:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 10:04:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 26 10:04:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:11.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 26 10:04:11 compute-0 ceph-mon[74456]: pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 10:04:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:04:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:04:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:04:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 10:04:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:12.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 10:04:12 compute-0 sudo[258043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:04:12 compute-0 sudo[258043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:04:12 compute-0 sudo[258043]: pam_unix(sudo:session): session closed for user root
Jan 26 10:04:12 compute-0 ceph-mon[74456]: pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 10:04:12 compute-0 ceph-mon[74456]: pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 10:04:12 compute-0 ceph-mon[74456]: pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 10:04:12 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:04:12 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:04:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:04:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 10:04:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 26 10:04:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:13.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 26 10:04:14 compute-0 ceph-mon[74456]: pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 597 B/s wr, 2 op/s
Jan 26 10:04:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:14.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Jan 26 10:04:15 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 26 10:04:15 compute-0 ceph-mon[74456]: Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 26 10:04:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:15.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:16.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:16 compute-0 ceph-mon[74456]: pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Jan 26 10:04:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:04:16] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 10:04:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:04:16] "GET /metrics HTTP/1.1" 200 48271 "" "Prometheus/2.51.0"
Jan 26 10:04:17 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 13.
Jan 26 10:04:17 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 10:04:17 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.593s CPU time.
Jan 26 10:04:17 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 10:04:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Jan 26 10:04:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:04:17.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:04:17 compute-0 podman[258121]: 2026-01-26 10:04:17.302352574 +0000 UTC m=+0.052803543 container create b03a590a3bc4e487fb372fd232da334e5f31dbcfccc00b452bf54b70d9b53e38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 26 10:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc2d13f1622129d0d5d85352b84c11bbab55ea2b05da4177347869fced2dd9/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc2d13f1622129d0d5d85352b84c11bbab55ea2b05da4177347869fced2dd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc2d13f1622129d0d5d85352b84c11bbab55ea2b05da4177347869fced2dd9/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc2d13f1622129d0d5d85352b84c11bbab55ea2b05da4177347869fced2dd9/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:04:17 compute-0 podman[258121]: 2026-01-26 10:04:17.362761822 +0000 UTC m=+0.113212771 container init b03a590a3bc4e487fb372fd232da334e5f31dbcfccc00b452bf54b70d9b53e38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 26 10:04:17 compute-0 podman[258121]: 2026-01-26 10:04:17.369828031 +0000 UTC m=+0.120278960 container start b03a590a3bc4e487fb372fd232da334e5f31dbcfccc00b452bf54b70d9b53e38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 10:04:17 compute-0 podman[258121]: 2026-01-26 10:04:17.277600564 +0000 UTC m=+0.028051523 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:04:17 compute-0 bash[258121]: b03a590a3bc4e487fb372fd232da334e5f31dbcfccc00b452bf54b70d9b53e38
Jan 26 10:04:17 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 10:04:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:17 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 10:04:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:17 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 10:04:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:17 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 10:04:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:17 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 10:04:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:17 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 10:04:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:17 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 10:04:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:17 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 10:04:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:17 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:04:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:17.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:04:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:18.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:18 compute-0 ceph-mon[74456]: pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:04:18
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', '.rgw.root', '.nfs', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes']
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:04:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 26 10:04:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:04:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:04:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:04:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:04:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Jan 26 10:04:19 compute-0 podman[258181]: 2026-01-26 10:04:19.167236635 +0000 UTC m=+0.098110911 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 10:04:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:19.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:04:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:04:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 26 10:04:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:20.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 26 10:04:20 compute-0 ceph-mgr[74755]: [devicehealth INFO root] Check health
Jan 26 10:04:20 compute-0 ceph-mon[74456]: pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Jan 26 10:04:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 511 B/s wr, 2 op/s
Jan 26 10:04:21 compute-0 sudo[258211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:04:21 compute-0 sudo[258211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:04:21 compute-0 sudo[258211]: pam_unix(sudo:session): session closed for user root
Jan 26 10:04:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 26 10:04:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:21.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 26 10:04:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:22.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:22 compute-0 ceph-mon[74456]: pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 511 B/s wr, 2 op/s
Jan 26 10:04:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:04:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s
Jan 26 10:04:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:23 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Jan 26 10:04:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:23 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 26 10:04:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:23 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:04:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:23 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:04:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:23 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:04:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:23 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:04:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:23 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:04:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:23 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:04:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:23.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:24.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:24 compute-0 ceph-mon[74456]: pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s
Jan 26 10:04:24 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3030815994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:04:24 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/613830193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:04:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 26 10:04:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100425 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 10:04:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:25.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:26 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3671987171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:04:26 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/456321413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:04:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:26.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:04:26] "GET /metrics HTTP/1.1" 200 48357 "" "Prometheus/2.51.0"
Jan 26 10:04:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:04:26] "GET /metrics HTTP/1.1" 200 48357 "" "Prometheus/2.51.0"
Jan 26 10:04:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:04:27.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:04:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 26 10:04:27 compute-0 ceph-mon[74456]: pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 26 10:04:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:27.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:04:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 10:04:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:28.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.491 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.492 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.492 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.492 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:04:28 compute-0 ceph-mon[74456]: pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.694 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.694 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.694 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.694 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.695 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.695 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.695 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.695 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.695 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.720 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.720 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.720 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.720 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:04:28 compute-0 nova_compute[254880]: 2026-01-26 10:04:28.720 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:04:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 26 10:04:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:04:29 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1863012155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:04:29 compute-0 nova_compute[254880]: 2026-01-26 10:04:29.166 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:04:29 compute-0 nova_compute[254880]: 2026-01-26 10:04:29.368 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:04:29 compute-0 nova_compute[254880]: 2026-01-26 10:04:29.370 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4947MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:04:29 compute-0 nova_compute[254880]: 2026-01-26 10:04:29.370 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:04:29 compute-0 nova_compute[254880]: 2026-01-26 10:04:29.370 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:04:29 compute-0 nova_compute[254880]: 2026-01-26 10:04:29.447 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:04:29 compute-0 nova_compute[254880]: 2026-01-26 10:04:29.447 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:04:29 compute-0 nova_compute[254880]: 2026-01-26 10:04:29.464 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000021:nfs.cephfs.2: -2
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c4000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 10:04:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:29 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 10:04:29 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1863012155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:04:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:29.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:04:29 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042271784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:04:29 compute-0 nova_compute[254880]: 2026-01-26 10:04:29.902 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:04:29 compute-0 nova_compute[254880]: 2026-01-26 10:04:29.906 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:04:29 compute-0 nova_compute[254880]: 2026-01-26 10:04:29.921 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:04:29 compute-0 nova_compute[254880]: 2026-01-26 10:04:29.923 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:04:29 compute-0 nova_compute[254880]: 2026-01-26 10:04:29.923 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:04:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 10:04:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:30.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 10:04:30 compute-0 ceph-mon[74456]: pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 26 10:04:30 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1042271784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:04:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:30 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b4001240 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 26 10:04:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:31 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b8001ac0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:31 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4698000b60 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:31.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:32.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:32 compute-0 ceph-mon[74456]: pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 26 10:04:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100432 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 10:04:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:32 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b4001240 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:04:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 10:04:33 compute-0 podman[258308]: 2026-01-26 10:04:33.158974625 +0000 UTC m=+0.081077781 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 26 10:04:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:33 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac001b00 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:33 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80023e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:33.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:04:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:04:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:04:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 26 10:04:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:34.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 26 10:04:34 compute-0 ceph-mon[74456]: pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 10:04:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:34 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46980016a0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 10:04:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:35 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b4002330 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:35 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac002600 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000021s ======
Jan 26 10:04:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:35.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 26 10:04:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:36.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:04:36] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 26 10:04:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:04:36] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 26 10:04:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:36 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80023e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:36 compute-0 ceph-mon[74456]: pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 26 10:04:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:04:37.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:04:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 170 B/s wr, 0 op/s
Jan 26 10:04:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:37 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46980016a0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:37 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b4002330 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:37.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:04:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:38.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:38 compute-0 sshd-session[258331]: Invalid user postgres from 157.245.76.178 port 42386
Jan 26 10:04:38 compute-0 sshd-session[258331]: Connection closed by invalid user postgres 157.245.76.178 port 42386 [preauth]
Jan 26 10:04:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:38 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac002600 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:38 compute-0 ceph-mon[74456]: pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 170 B/s wr, 0 op/s
Jan 26 10:04:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 170 B/s wr, 0 op/s
Jan 26 10:04:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:39 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80023e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:39 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46980016a0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:04:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:39.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:04:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:40.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:40 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b4003040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:40 compute-0 ceph-mon[74456]: pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 170 B/s wr, 0 op/s
Jan 26 10:04:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Jan 26 10:04:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=cleanup t=2026-01-26T10:04:41.422632675Z level=info msg="Completed cleanup jobs" duration=23.857226ms
Jan 26 10:04:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=plugins.update.checker t=2026-01-26T10:04:41.518776777Z level=info msg="Update check succeeded" duration=48.842767ms
Jan 26 10:04:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=grafana.update.checker t=2026-01-26T10:04:41.519668997Z level=info msg="Update check succeeded" duration=49.699856ms
Jan 26 10:04:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:41 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:41 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000021s ======
Jan 26 10:04:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:41.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 26 10:04:41 compute-0 sudo[258337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:04:41 compute-0 sudo[258337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:04:41 compute-0 sudo[258337]: pam_unix(sudo:session): session closed for user root
Jan 26 10:04:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000021s ======
Jan 26 10:04:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:42.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 26 10:04:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:42 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4698002b10 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:42 compute-0 ceph-mon[74456]: pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Jan 26 10:04:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:04:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:04:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:43 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b4003040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:43 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:04:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:43.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:04:43 compute-0 ceph-mon[74456]: pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:04:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000021s ======
Jan 26 10:04:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:44.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 26 10:04:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:44 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:04:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:45 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:45 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:45.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:46.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:46 compute-0 ceph-mon[74456]: pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:04:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:04:46] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 26 10:04:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:04:46] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 26 10:04:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:46 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:04:47.096Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:04:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:04:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:47 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c4000df0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:47 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000021s ======
Jan 26 10:04:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:47.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 26 10:04:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:04:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:48.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:48 compute-0 ceph-mon[74456]: pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:04:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:04:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
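
The handle_command/audit pair shows the mgr polling the monitor for the OSD blocklist. The same monitor command can be issued from any client through the python-rados binding's mon_command(); a sketch assuming python3-rados is installed and a keyring with mon read caps is available:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # admin conf assumed
    cluster.connect()
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, json.loads(outbuf or b"[]"))  # empty list when nothing is blocklisted
    cluster.shutdown()
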
Jan 26 10:04:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:48 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b4003040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:04:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:04:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:04:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:04:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:04:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:04:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:04:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:04:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:49 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:49 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c4002050 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:49.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:50.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:50 compute-0 podman[258370]: 2026-01-26 10:04:50.212698953 +0000 UTC m=+0.133580050 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
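
This podman entry is a health_status event: podman ran the container's configured healthcheck ('test': '/openstack/healthcheck' in the config_data above) and it reported healthy with no failing streak. A sketch for tailing these events, assuming a podman version new enough to emit health_status events and to accept --format json here:

    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--filter", "event=health_status",
         "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:                      # one JSON object per event
        evt = json.loads(line)
        print(evt.get("Name"), evt.get("HealthStatus"))
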
Jan 26 10:04:50 compute-0 ceph-mon[74456]: pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:04:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:50 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 497 B/s rd, 0 op/s
Jan 26 10:04:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:51 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b4003040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:51 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:51.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:52.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:52 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:52 compute-0 ceph-mon[74456]: pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 497 B/s rd, 0 op/s
Jan 26 10:04:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:04:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 0 op/s
Jan 26 10:04:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:53 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:53 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b4003040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:53.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:54 compute-0 ceph-mon[74456]: pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 0 op/s
Jan 26 10:04:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:54.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:04:54.689 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:04:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:04:54.690 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:04:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:04:54.690 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
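
The acquire/acquired/released trio is oslo.concurrency's standard DEBUG output for an in-process lock; here neutron's ProcessMonitor serializes its child-process liveness checks. Equivalent minimal usage, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # liveness checks run under the named in-process lock

    # With DEBUG logging configured, each call emits the same
    # acquire/acquired/released line pattern seen above.
    check_child_processes()
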
Jan 26 10:04:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:54 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b4003040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 0 op/s
Jan 26 10:04:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:55 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c40021f0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:55 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:55.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:04:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:56.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:04:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:04:56] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 26 10:04:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:04:56] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 26 10:04:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:56 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:04:57.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:04:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:04:57.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:04:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:04:57.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:04:57 compute-0 ceph-mon[74456]: pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 0 op/s
Jan 26 10:04:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 0 op/s
Jan 26 10:04:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:57 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b4003040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:57 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c4002390 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:04:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:57.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:04:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.079703) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421898079735, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 831, "num_deletes": 250, "total_data_size": 1234951, "memory_usage": 1262696, "flush_reason": "Manual Compaction"}
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421898088525, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 793278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22103, "largest_seqno": 22933, "table_properties": {"data_size": 789864, "index_size": 1194, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9210, "raw_average_key_size": 20, "raw_value_size": 782456, "raw_average_value_size": 1719, "num_data_blocks": 52, "num_entries": 455, "num_filter_entries": 455, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769421828, "oldest_key_time": 1769421828, "file_creation_time": 1769421898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 8879 microseconds, and 3486 cpu microseconds.
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.088577) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 793278 bytes OK
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.088600) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.091519) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.091579) EVENT_LOG_v1 {"time_micros": 1769421898091564, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.091610) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1230912, prev total WAL file size 1230912, number of live WAL files 2.
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.093162) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(774KB)], [47(14MB)]
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421898093253, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 15759230, "oldest_snapshot_seqno": -1}
Jan 26 10:04:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:04:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:04:58.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5380 keys, 12101408 bytes, temperature: kUnknown
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421898150993, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12101408, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12066694, "index_size": 20140, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 136156, "raw_average_key_size": 25, "raw_value_size": 11970687, "raw_average_value_size": 2225, "num_data_blocks": 823, "num_entries": 5380, "num_filter_entries": 5380, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769421898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.151275) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12101408 bytes
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.152690) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 272.6 rd, 209.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 14.3 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(35.1) write-amplify(15.3) OK, records in: 5869, records dropped: 489 output_compression: NoCompression
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.152706) EVENT_LOG_v1 {"time_micros": 1769421898152698, "job": 24, "event": "compaction_finished", "compaction_time_micros": 57819, "compaction_time_cpu_micros": 26824, "output_level": 6, "num_output_files": 1, "total_output_size": 12101408, "num_input_records": 5869, "num_output_records": 5380, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421898152955, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421898155469, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.093044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.155541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.155546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.155548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.155549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:04:58 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:04:58.155551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
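
RocksDB's EVENT_LOG_v1 records embed a JSON document after the marker, so the flush/compaction activity above (job 23 flush, job 24 manual compaction) is machine-parseable straight from the journal. A small sketch:

    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def iter_rocksdb_events(lines):
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))

    sample = ('ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": '
              '1769421898079735, "job": 23, "event": "flush_started"}')
    for evt in iter_rocksdb_events([sample]):
        print(evt["job"], evt["event"])   # -> 23 flush_started
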
Jan 26 10:04:58 compute-0 ceph-mon[74456]: pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 0 op/s
Jan 26 10:04:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/192998050' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:04:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/192998050' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:04:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:58 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 0 op/s
Jan 26 10:04:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:59 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:04:59 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b4003040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:04:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:04:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000021s ======
Jan 26 10:04:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:04:59.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 26 10:05:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:00.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:00 compute-0 ceph-mon[74456]: pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 248 B/s rd, 0 op/s
Jan 26 10:05:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:00 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c40095a0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 497 B/s rd, 0 op/s
Jan 26 10:05:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:01 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:01 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b0001090 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:01.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:01 compute-0 sudo[258413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:05:01 compute-0 sudo[258413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:05:01 compute-0 sudo[258413]: pam_unix(sudo:session): session closed for user root
Jan 26 10:05:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:02.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:02 compute-0 ceph-mon[74456]: pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 497 B/s rd, 0 op/s
Jan 26 10:05:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:02 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:05:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:03 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c4009720 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:03 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:05:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:03.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:05:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:05:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:05:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:05:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:04.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:04 compute-0 podman[258440]: 2026-01-26 10:05:04.149337161 +0000 UTC m=+0.079977677 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:05:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:04 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b0001b90 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:04 compute-0 ceph-mon[74456]: pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:05 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:05 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c400a040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:05:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:05.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:05:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:06.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:05:06] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 26 10:05:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:05:06] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 26 10:05:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:06 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:06 compute-0 ceph-mon[74456]: pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:05:07.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:05:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:07 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b0001b90 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:07 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000021s ======
Jan 26 10:05:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:07.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 26 10:05:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:05:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:08.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:08 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c400a040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:08 compute-0 ceph-mon[74456]: pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:09 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:09 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b0001b90 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:09.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:09 compute-0 ceph-mon[74456]: pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:10.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:10 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 10:05:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:11 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c400a040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:11 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c400a040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:11.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000021s ======
Jan 26 10:05:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:12.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 26 10:05:12 compute-0 sudo[258467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:05:12 compute-0 sudo[258467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:05:12 compute-0 sudo[258467]: pam_unix(sudo:session): session closed for user root
Jan 26 10:05:12 compute-0 sudo[258494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:05:12 compute-0 sudo[258494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:05:12 compute-0 ceph-mon[74456]: pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 10:05:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:12 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b0003020 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:13 compute-0 sudo[258494]: pam_unix(sudo:session): session closed for user root
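
The sudo sequence above is the cephadm mgr module's remote-execution pattern: it ships a single-file copy of cephadm to /var/lib/ceph/<fsid>/cephadm.<digest> and runs subcommands such as gather-facts (and, further below, ceph-volume) through the passwordless sudo rule it configures for the ceph-admin user. Reproducing the gather-facts call by hand, with paths copied from the log:

    import subprocess

    out = subprocess.run(
        ["sudo", "/bin/python3",
         "/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/"
         "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36",
         "--timeout", "895", "gather-facts"],
        capture_output=True, text=True,
    )
    print(out.stdout[:200])   # gather-facts emits a JSON facts document
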
Jan 26 10:05:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 10:05:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 10:05:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:05:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:13 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 10:05:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:13 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:13 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:13.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:05:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:14.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:05:14 compute-0 ceph-mon[74456]: pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 10:05:14 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 10:05:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:14 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c400a040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:14 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 10:05:15 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 10:05:15 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 26 10:05:15 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 10:05:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:15 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b0003020 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:15 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:15.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:15 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:15 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:15 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:15 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:15 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 10:05:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 26 10:05:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 10:05:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:05:16 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:05:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:05:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:05:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:05:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:05:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:05:16 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:05:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:05:16 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:05:16 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:05:16 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:05:16 compute-0 sudo[258553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:05:16 compute-0 sudo[258553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:05:16 compute-0 sudo[258553]: pam_unix(sudo:session): session closed for user root
Jan 26 10:05:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:16.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:16 compute-0 sudo[258578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:05:16 compute-0 sudo[258578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
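
The sudo line above is the cephadm OSD-creation path in full: the mgr ships a content-addressed copy of the cephadm binary under /var/lib/ceph/<fsid>/, runs it as root, passes the drive-group name via --env CEPH_VOLUME_OSDSPEC_AFFINITY, feeds cluster config and keyring as JSON on stdin (--config-json -), and hands everything after "--" to ceph-volume inside a one-shot container (the podman create/start/died/remove lines that follow). Below is a reconstruction with all values copied verbatim from the log; the stdin payload is not shown in the log, so an empty placeholder stands in for it.

    import subprocess

    # Reconstruction of the cephadm invocation logged above (sudo PID 258578).
    # Requires root; the --config-json payload is elided in the log, so a
    # placeholder is used -- a real run needs the generated config + keyring.
    subprocess.run(
        [
            "/bin/python3",
            "/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/"
            "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36",
            "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
            "--image", "quay.io/ceph/ceph@sha256:"
            "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec",
            "--timeout", "895",
            "ceph-volume",
            "--fsid", "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
            "--config-json", "-",
            "--",
            "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
            "--yes", "--no-systemd",
        ],
        input=b"{}",  # placeholder only; see note above
        check=False,
    )
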
Jan 26 10:05:16 compute-0 podman[258645]: 2026-01-26 10:05:16.636638144 +0000 UTC m=+0.053049822 container create 93a3708c448e8921e13a48a78fa44c0fe3800458c87740d7761d6cd7e964f0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 26 10:05:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:05:16] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 26 10:05:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:05:16] "GET /metrics HTTP/1.1" 200 48350 "" "Prometheus/2.51.0"
Jan 26 10:05:16 compute-0 systemd[1]: Started libpod-conmon-93a3708c448e8921e13a48a78fa44c0fe3800458c87740d7761d6cd7e964f0b7.scope.
Jan 26 10:05:16 compute-0 podman[258645]: 2026-01-26 10:05:16.614035075 +0000 UTC m=+0.030446733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:05:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:05:16 compute-0 podman[258645]: 2026-01-26 10:05:16.73972556 +0000 UTC m=+0.156137288 container init 93a3708c448e8921e13a48a78fa44c0fe3800458c87740d7761d6cd7e964f0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_tu, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:05:16 compute-0 podman[258645]: 2026-01-26 10:05:16.753229527 +0000 UTC m=+0.169641215 container start 93a3708c448e8921e13a48a78fa44c0fe3800458c87740d7761d6cd7e964f0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:05:16 compute-0 podman[258645]: 2026-01-26 10:05:16.757395209 +0000 UTC m=+0.173806887 container attach 93a3708c448e8921e13a48a78fa44c0fe3800458c87740d7761d6cd7e964f0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_tu, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:05:16 compute-0 jolly_tu[258661]: 167 167
Jan 26 10:05:16 compute-0 systemd[1]: libpod-93a3708c448e8921e13a48a78fa44c0fe3800458c87740d7761d6cd7e964f0b7.scope: Deactivated successfully.
Jan 26 10:05:16 compute-0 conmon[258661]: conmon 93a3708c448e8921e13a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93a3708c448e8921e13a48a78fa44c0fe3800458c87740d7761d6cd7e964f0b7.scope/container/memory.events
Jan 26 10:05:16 compute-0 podman[258645]: 2026-01-26 10:05:16.759706091 +0000 UTC m=+0.176117739 container died 93a3708c448e8921e13a48a78fa44c0fe3800458c87740d7761d6cd7e964f0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_tu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:05:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-df93a4c72d4bb5dfe196c3d04896d301e34c7b882aa1463ddc9aec007e42de1b-merged.mount: Deactivated successfully.
Jan 26 10:05:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:16 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:16 compute-0 podman[258645]: 2026-01-26 10:05:16.798554919 +0000 UTC m=+0.214966567 container remove 93a3708c448e8921e13a48a78fa44c0fe3800458c87740d7761d6cd7e964f0b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Jan 26 10:05:16 compute-0 systemd[1]: libpod-conmon-93a3708c448e8921e13a48a78fa44c0fe3800458c87740d7761d6cd7e964f0b7.scope: Deactivated successfully.
Jan 26 10:05:16 compute-0 ceph-mon[74456]: pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:16 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 10:05:16 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:05:16 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:05:16 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:16 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:16 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:05:16 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:05:16 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:05:16 compute-0 podman[258686]: 2026-01-26 10:05:16.97939465 +0000 UTC m=+0.047126001 container create 1d9c7fc7045c3a98b80d12e17242a85f662d90766cb4de56e75c77899085dedf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 26 10:05:17 compute-0 systemd[1]: Started libpod-conmon-1d9c7fc7045c3a98b80d12e17242a85f662d90766cb4de56e75c77899085dedf.scope.
Jan 26 10:05:17 compute-0 podman[258686]: 2026-01-26 10:05:16.955603805 +0000 UTC m=+0.023335166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:05:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bed1866e060bad76418108d6aa5531f4f0e17b8ded43d03664435c9f98472e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bed1866e060bad76418108d6aa5531f4f0e17b8ded43d03664435c9f98472e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bed1866e060bad76418108d6aa5531f4f0e17b8ded43d03664435c9f98472e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bed1866e060bad76418108d6aa5531f4f0e17b8ded43d03664435c9f98472e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bed1866e060bad76418108d6aa5531f4f0e17b8ded43d03664435c9f98472e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:17 compute-0 podman[258686]: 2026-01-26 10:05:17.082912536 +0000 UTC m=+0.150643887 container init 1d9c7fc7045c3a98b80d12e17242a85f662d90766cb4de56e75c77899085dedf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_banzai, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 10:05:17 compute-0 podman[258686]: 2026-01-26 10:05:17.094579573 +0000 UTC m=+0.162310924 container start 1d9c7fc7045c3a98b80d12e17242a85f662d90766cb4de56e75c77899085dedf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 26 10:05:17 compute-0 podman[258686]: 2026-01-26 10:05:17.098449578 +0000 UTC m=+0.166180929 container attach 1d9c7fc7045c3a98b80d12e17242a85f662d90766cb4de56e75c77899085dedf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_banzai, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:05:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:05:17.100Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:05:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:05:17.100Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
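
Both dashboard receivers are unreachable from alertmanager: the webhook posts to compute-1 and compute-2 on port 8443 time out at the TCP level ("dial tcp ... i/o timeout"), so the notification is dropped once the retry budget is exhausted. A plain connect test, assuming only the standard library, separates this network-level failure from an application error:

    import socket

    # The two receivers alertmanager could not reach, per the dispatcher
    # errors above. A raw TCP connect reproduces the "dial tcp" failure
    # without involving HTTP at all.
    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            socket.create_connection((host, 8443), timeout=5).close()
            print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)
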
Jan 26 10:05:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:17 compute-0 mystifying_banzai[258703]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:05:17 compute-0 mystifying_banzai[258703]: --> All data devices are unavailable
Jan 26 10:05:17 compute-0 systemd[1]: libpod-1d9c7fc7045c3a98b80d12e17242a85f662d90766cb4de56e75c77899085dedf.scope: Deactivated successfully.
Jan 26 10:05:17 compute-0 podman[258686]: 2026-01-26 10:05:17.528405399 +0000 UTC m=+0.596136800 container died 1d9c7fc7045c3a98b80d12e17242a85f662d90766cb4de56e75c77899085dedf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 10:05:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-32bed1866e060bad76418108d6aa5531f4f0e17b8ded43d03664435c9f98472e-merged.mount: Deactivated successfully.
Jan 26 10:05:17 compute-0 podman[258686]: 2026-01-26 10:05:17.573935665 +0000 UTC m=+0.641666996 container remove 1d9c7fc7045c3a98b80d12e17242a85f662d90766cb4de56e75c77899085dedf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_banzai, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 10:05:17 compute-0 systemd[1]: libpod-conmon-1d9c7fc7045c3a98b80d12e17242a85f662d90766cb4de56e75c77899085dedf.scope: Deactivated successfully.
Jan 26 10:05:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:17 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c400a040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:17 compute-0 sudo[258578]: pam_unix(sudo:session): session closed for user root
Jan 26 10:05:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:17 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4698001090 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:17 compute-0 sudo[258732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:05:17 compute-0 sudo[258732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:05:17 compute-0 sudo[258732]: pam_unix(sudo:session): session closed for user root
Jan 26 10:05:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:17.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:17 compute-0 sudo[258757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:05:17 compute-0 sudo[258757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:05:17 compute-0 ceph-mon[74456]: pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:05:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:05:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:18.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:05:18 compute-0 podman[258823]: 2026-01-26 10:05:18.200273761 +0000 UTC m=+0.041311254 container create 322dd06b2392eec2f9522062732960789800329252f16c8a034160048f6d3bf9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_kalam, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:05:18 compute-0 systemd[1]: Started libpod-conmon-322dd06b2392eec2f9522062732960789800329252f16c8a034160048f6d3bf9.scope.
Jan 26 10:05:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:05:18 compute-0 podman[258823]: 2026-01-26 10:05:18.181499767 +0000 UTC m=+0.022537240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:05:18 compute-0 podman[258823]: 2026-01-26 10:05:18.278860165 +0000 UTC m=+0.119897658 container init 322dd06b2392eec2f9522062732960789800329252f16c8a034160048f6d3bf9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:05:18 compute-0 podman[258823]: 2026-01-26 10:05:18.291162577 +0000 UTC m=+0.132200020 container start 322dd06b2392eec2f9522062732960789800329252f16c8a034160048f6d3bf9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 26 10:05:18 compute-0 podman[258823]: 2026-01-26 10:05:18.294984002 +0000 UTC m=+0.136021465 container attach 322dd06b2392eec2f9522062732960789800329252f16c8a034160048f6d3bf9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_kalam, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 10:05:18 compute-0 sad_kalam[258839]: 167 167
Jan 26 10:05:18 compute-0 systemd[1]: libpod-322dd06b2392eec2f9522062732960789800329252f16c8a034160048f6d3bf9.scope: Deactivated successfully.
Jan 26 10:05:18 compute-0 conmon[258839]: conmon 322dd06b2392eec2f952 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-322dd06b2392eec2f9522062732960789800329252f16c8a034160048f6d3bf9.scope/container/memory.events
Jan 26 10:05:18 compute-0 podman[258823]: 2026-01-26 10:05:18.297797074 +0000 UTC m=+0.138834527 container died 322dd06b2392eec2f9522062732960789800329252f16c8a034160048f6d3bf9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 26 10:05:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-16a5791f6a39e8fbf4652df200588075b8a46e1b0a757ab83d9882d0ffb3d202-merged.mount: Deactivated successfully.
Jan 26 10:05:18 compute-0 podman[258823]: 2026-01-26 10:05:18.347644144 +0000 UTC m=+0.188681597 container remove 322dd06b2392eec2f9522062732960789800329252f16c8a034160048f6d3bf9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 10:05:18 compute-0 systemd[1]: libpod-conmon-322dd06b2392eec2f9522062732960789800329252f16c8a034160048f6d3bf9.scope: Deactivated successfully.
Jan 26 10:05:18 compute-0 podman[258865]: 2026-01-26 10:05:18.514604069 +0000 UTC m=+0.046459126 container create cbc186c0e1935c9bc565f890a0444eed3bc819d0e20f971345b368fb0cf848e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_galois, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:05:18 compute-0 systemd[1]: Started libpod-conmon-cbc186c0e1935c9bc565f890a0444eed3bc819d0e20f971345b368fb0cf848e1.scope.
Jan 26 10:05:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4627810a5e189f2a1e3606faf98ba8227364cc15e3b60851d715a5d3e537a129/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4627810a5e189f2a1e3606faf98ba8227364cc15e3b60851d715a5d3e537a129/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4627810a5e189f2a1e3606faf98ba8227364cc15e3b60851d715a5d3e537a129/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4627810a5e189f2a1e3606faf98ba8227364cc15e3b60851d715a5d3e537a129/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:18 compute-0 podman[258865]: 2026-01-26 10:05:18.496305135 +0000 UTC m=+0.028160222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:05:18 compute-0 podman[258865]: 2026-01-26 10:05:18.602395567 +0000 UTC m=+0.134250704 container init cbc186c0e1935c9bc565f890a0444eed3bc819d0e20f971345b368fb0cf848e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 10:05:18 compute-0 podman[258865]: 2026-01-26 10:05:18.608131294 +0000 UTC m=+0.139986381 container start cbc186c0e1935c9bc565f890a0444eed3bc819d0e20f971345b368fb0cf848e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 10:05:18 compute-0 podman[258865]: 2026-01-26 10:05:18.612096441 +0000 UTC m=+0.143951538 container attach cbc186c0e1935c9bc565f890a0444eed3bc819d0e20f971345b368fb0cf848e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:05:18
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.mgr', 'images', 'default.rgw.control', 'backups', '.nfs', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root']
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
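
This balancer pass is a no-op: in upmap mode it evaluates all twelve pools against the 0.05 max-misplaced ratio and prepares 0 of an allowed 10 upmap changes, since all 353 PGs are already active+clean. The per-pass cap of 10 presumably corresponds to the balancer module's upmap_max_optimizations default. Its status can be queried through the same JSON command interface shown earlier; the sketch assumes mgr-module commands are forwarded via the monitor, as the ceph CLI does.

    import json
    import rados

    # Query the balancer via the JSON command interface. "balancer status"
    # is an mgr-module command; the assumption here is that mon_command()
    # forwards it, mirroring `ceph balancer status`.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "balancer status", "format": "json"}), b"")
    print(json.loads(outbuf) if ret == 0 else outs)
    cluster.shutdown()
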
Jan 26 10:05:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:05:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:05:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:18 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:05:18 compute-0 practical_galois[258881]: {
Jan 26 10:05:18 compute-0 practical_galois[258881]:     "0": [
Jan 26 10:05:18 compute-0 practical_galois[258881]:         {
Jan 26 10:05:18 compute-0 practical_galois[258881]:             "devices": [
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "/dev/loop3"
Jan 26 10:05:18 compute-0 practical_galois[258881]:             ],
Jan 26 10:05:18 compute-0 practical_galois[258881]:             "lv_name": "ceph_lv0",
Jan 26 10:05:18 compute-0 practical_galois[258881]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:05:18 compute-0 practical_galois[258881]:             "lv_size": "21470642176",
Jan 26 10:05:18 compute-0 practical_galois[258881]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:05:18 compute-0 practical_galois[258881]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:05:18 compute-0 practical_galois[258881]:             "name": "ceph_lv0",
Jan 26 10:05:18 compute-0 practical_galois[258881]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:05:18 compute-0 practical_galois[258881]:             "tags": {
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "ceph.cluster_name": "ceph",
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "ceph.crush_device_class": "",
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "ceph.encrypted": "0",
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "ceph.osd_id": "0",
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "ceph.type": "block",
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "ceph.vdo": "0",
Jan 26 10:05:18 compute-0 practical_galois[258881]:                 "ceph.with_tpm": "0"
Jan 26 10:05:18 compute-0 practical_galois[258881]:             },
Jan 26 10:05:18 compute-0 practical_galois[258881]:             "type": "block",
Jan 26 10:05:18 compute-0 practical_galois[258881]:             "vg_name": "ceph_vg0"
Jan 26 10:05:18 compute-0 practical_galois[258881]:         }
Jan 26 10:05:18 compute-0 practical_galois[258881]:     ]
Jan 26 10:05:18 compute-0 practical_galois[258881]: }
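
The JSON block above is the output of the `ceph-volume ... lvm list --format json` call from 10:05:17 (sudo PID 258757, run in the practical_galois container). It also explains the earlier `lvm batch` result: the only candidate device, /dev/ceph_vg0/ceph_lv0, is already tagged with ceph.osd_id=0, so "All data devices are unavailable" is the expected outcome, not a failure. A short parsing sketch, assuming the JSON has been captured to a file:

    import json

    # Parse a captured `ceph-volume lvm list --format json` payload. The
    # top-level keys are OSD ids; each value is a list of LVs backing that OSD.
    with open("lvm-list.json") as fh:  # hypothetical capture of the output above
        inventory = json.load(fh)

    for osd_id, lvs in inventory.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"type={tags['ceph.type']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 type=block devices=/dev/loop3 osd_fsid=ac85653c-...
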
Jan 26 10:05:18 compute-0 systemd[1]: libpod-cbc186c0e1935c9bc565f890a0444eed3bc819d0e20f971345b368fb0cf848e1.scope: Deactivated successfully.
Jan 26 10:05:18 compute-0 podman[258865]: 2026-01-26 10:05:18.936447962 +0000 UTC m=+0.468303029 container died cbc186c0e1935c9bc565f890a0444eed3bc819d0e20f971345b368fb0cf848e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_galois, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:05:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:05:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4627810a5e189f2a1e3606faf98ba8227364cc15e3b60851d715a5d3e537a129-merged.mount: Deactivated successfully.
Jan 26 10:05:18 compute-0 podman[258865]: 2026-01-26 10:05:18.983842728 +0000 UTC m=+0.515697785 container remove cbc186c0e1935c9bc565f890a0444eed3bc819d0e20f971345b368fb0cf848e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_galois, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:05:18 compute-0 systemd[1]: libpod-conmon-cbc186c0e1935c9bc565f890a0444eed3bc819d0e20f971345b368fb0cf848e1.scope: Deactivated successfully.
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
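
The pg_autoscaler lines follow its sizing rule directly: raw PG target = capacity ratio x pool bias x root PG budget, where the budget here works out to 300. Decomposing 300 as 3 OSDs x mon_target_pg_per_osd=100 is an assumption consistent with this 3-host, 60 GiB cluster and the default setting; only the product is visible in the log. The raw target is then quantized (to a power of two, subject to per-pool minimums), which is why .mgr lands on 1 while the zero-usage pools stay at 32. Recomputing the non-zero pools reproduces the logged values:

    # Reproduce the pg_autoscaler targets logged above.
    root_pg_target = 3 * 100  # assumed: 3 OSDs x mon_target_pg_per_osd (default 100)

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (3.8154424692322717e-07, 1.0),
    }
    for name, (usage_ratio, bias) in pools.items():
        print(name, usage_ratio * bias * root_pg_target)
    # .mgr               -> ~0.0021557  (logged: 0.0021557249951162337, quantized to 1)
    # cephfs.cephfs.meta -> ~0.00061047 (logged: 0.0006104707950771635, quantized to 16)
    # .rgw.root          -> ~0.00011446 (logged: 0.00011446327407696816, quantized to 32)
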
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:05:18 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:05:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:05:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:05:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:05:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:05:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:05:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:05:19 compute-0 sudo[258757]: pam_unix(sudo:session): session closed for user root
Jan 26 10:05:19 compute-0 sudo[258901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:05:19 compute-0 sudo[258901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:05:19 compute-0 sudo[258901]: pam_unix(sudo:session): session closed for user root
Jan 26 10:05:19 compute-0 sudo[258926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:05:19 compute-0 sudo[258926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:05:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:19 compute-0 podman[258993]: 2026-01-26 10:05:19.521971337 +0000 UTC m=+0.038858649 container create 6720b56181de0a235bc589c5196d9b24cd6a38b73c80ec6ce0a7f76c09a6b2b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_almeida, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:05:19 compute-0 systemd[1]: Started libpod-conmon-6720b56181de0a235bc589c5196d9b24cd6a38b73c80ec6ce0a7f76c09a6b2b4.scope.
Jan 26 10:05:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:05:19 compute-0 podman[258993]: 2026-01-26 10:05:19.60277166 +0000 UTC m=+0.119659072 container init 6720b56181de0a235bc589c5196d9b24cd6a38b73c80ec6ce0a7f76c09a6b2b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_almeida, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Jan 26 10:05:19 compute-0 podman[258993]: 2026-01-26 10:05:19.506636668 +0000 UTC m=+0.023524000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:05:19 compute-0 podman[258993]: 2026-01-26 10:05:19.608472976 +0000 UTC m=+0.125360288 container start 6720b56181de0a235bc589c5196d9b24cd6a38b73c80ec6ce0a7f76c09a6b2b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:05:19 compute-0 podman[258993]: 2026-01-26 10:05:19.611666406 +0000 UTC m=+0.128553748 container attach 6720b56181de0a235bc589c5196d9b24cd6a38b73c80ec6ce0a7f76c09a6b2b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_almeida, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 10:05:19 compute-0 confident_almeida[259011]: 167 167
Jan 26 10:05:19 compute-0 systemd[1]: libpod-6720b56181de0a235bc589c5196d9b24cd6a38b73c80ec6ce0a7f76c09a6b2b4.scope: Deactivated successfully.
Jan 26 10:05:19 compute-0 podman[258993]: 2026-01-26 10:05:19.615696316 +0000 UTC m=+0.132583658 container died 6720b56181de0a235bc589c5196d9b24cd6a38b73c80ec6ce0a7f76c09a6b2b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:05:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:19 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb4fc3cc85b0c30ae78ad111ce9ffff8d986fe53d8cc4c5db73266671e4aa72c-merged.mount: Deactivated successfully.
Jan 26 10:05:19 compute-0 podman[258993]: 2026-01-26 10:05:19.666988597 +0000 UTC m=+0.183875949 container remove 6720b56181de0a235bc589c5196d9b24cd6a38b73c80ec6ce0a7f76c09a6b2b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:05:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:19 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46c400a040 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:19 compute-0 systemd[1]: libpod-conmon-6720b56181de0a235bc589c5196d9b24cd6a38b73c80ec6ce0a7f76c09a6b2b4.scope: Deactivated successfully.
Jan 26 10:05:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:19.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:19 compute-0 podman[259035]: 2026-01-26 10:05:19.839477766 +0000 UTC m=+0.042668484 container create ee4a8b908fce2c26a0b8b14dbfe221151c3e681e3b215a2b40764da01be546af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_rubin, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 10:05:19 compute-0 systemd[1]: Started libpod-conmon-ee4a8b908fce2c26a0b8b14dbfe221151c3e681e3b215a2b40764da01be546af.scope.
Jan 26 10:05:19 compute-0 podman[259035]: 2026-01-26 10:05:19.819946714 +0000 UTC m=+0.023137452 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:05:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96995d48d279b81bfe7b5c364b3dc414d80363559ddb30ac650194347ab4c7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96995d48d279b81bfe7b5c364b3dc414d80363559ddb30ac650194347ab4c7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96995d48d279b81bfe7b5c364b3dc414d80363559ddb30ac650194347ab4c7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96995d48d279b81bfe7b5c364b3dc414d80363559ddb30ac650194347ab4c7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:19 compute-0 podman[259035]: 2026-01-26 10:05:19.937918278 +0000 UTC m=+0.141109036 container init ee4a8b908fce2c26a0b8b14dbfe221151c3e681e3b215a2b40764da01be546af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_rubin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:05:19 compute-0 podman[259035]: 2026-01-26 10:05:19.951080849 +0000 UTC m=+0.154271557 container start ee4a8b908fce2c26a0b8b14dbfe221151c3e681e3b215a2b40764da01be546af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_rubin, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:05:19 compute-0 podman[259035]: 2026-01-26 10:05:19.955078647 +0000 UTC m=+0.158269375 container attach ee4a8b908fce2c26a0b8b14dbfe221151c3e681e3b215a2b40764da01be546af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_rubin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:05:19 compute-0 ceph-mon[74456]: pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:20.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:20 compute-0 sshd-session[258978]: Invalid user postgres from 157.245.76.178 port 48378
Jan 26 10:05:20 compute-0 sshd-session[258978]: Connection closed by invalid user postgres 157.245.76.178 port 48378 [preauth]
Jan 26 10:05:20 compute-0 lvm[259139]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:05:20 compute-0 lvm[259139]: VG ceph_vg0 finished
Jan 26 10:05:20 compute-0 ecstatic_rubin[259051]: {}
Jan 26 10:05:20 compute-0 systemd[1]: libpod-ee4a8b908fce2c26a0b8b14dbfe221151c3e681e3b215a2b40764da01be546af.scope: Deactivated successfully.
Jan 26 10:05:20 compute-0 systemd[1]: libpod-ee4a8b908fce2c26a0b8b14dbfe221151c3e681e3b215a2b40764da01be546af.scope: Consumed 1.161s CPU time.
Jan 26 10:05:20 compute-0 podman[259035]: 2026-01-26 10:05:20.705077023 +0000 UTC m=+0.908267731 container died ee4a8b908fce2c26a0b8b14dbfe221151c3e681e3b215a2b40764da01be546af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:05:20 compute-0 podman[259126]: 2026-01-26 10:05:20.719080322 +0000 UTC m=+0.111142594 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 10:05:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b96995d48d279b81bfe7b5c364b3dc414d80363559ddb30ac650194347ab4c7a-merged.mount: Deactivated successfully.
Jan 26 10:05:20 compute-0 podman[259035]: 2026-01-26 10:05:20.744277598 +0000 UTC m=+0.947468296 container remove ee4a8b908fce2c26a0b8b14dbfe221151c3e681e3b215a2b40764da01be546af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_rubin, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:05:20 compute-0 systemd[1]: libpod-conmon-ee4a8b908fce2c26a0b8b14dbfe221151c3e681e3b215a2b40764da01be546af.scope: Deactivated successfully.
Jan 26 10:05:20 compute-0 sudo[258926]: pam_unix(sudo:session): session closed for user root
Jan 26 10:05:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:20 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4698001090 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:05:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:05:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:20 compute-0 sudo[259171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:05:20 compute-0 sudo[259171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:05:20 compute-0 sudo[259171]: pam_unix(sudo:session): session closed for user root
Jan 26 10:05:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 10:05:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:21 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:21 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:21.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:21 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:21 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:05:21 compute-0 sudo[259196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:05:21 compute-0 sudo[259196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:05:21 compute-0 sudo[259196]: pam_unix(sudo:session): session closed for user root
Jan 26 10:05:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000021s ======
Jan 26 10:05:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:22.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 26 10:05:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:22 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46ac003310 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:22 compute-0 ceph-mon[74456]: pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 26 10:05:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:05:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:23 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4698001090 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:23 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:23.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:23 compute-0 ceph-mon[74456]: pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000021s ======
Jan 26 10:05:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:24.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 26 10:05:24 compute-0 kernel: ganesha.nfsd[258288]: segfault at 50 ip 00007f474519832e sp 00007f46bfffe210 error 4 in libntirpc.so.5.8[7f474517d000+2c000] likely on CPU 2 (core 0, socket 2)
Jan 26 10:05:24 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 10:05:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[258137]: 26/01/2026 10:05:24 : epoch 69773c21 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f46b80034e0 fd 46 proxy ignored for local
Jan 26 10:05:24 compute-0 systemd[1]: Started Process Core Dump (PID 259225/UID 0).
Jan 26 10:05:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:25 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1712979899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:05:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100525 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:05:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:25.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:26 compute-0 systemd-coredump[259226]: Process 258141 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 42:
                                                    #0  0x00007f474519832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 10:05:26 compute-0 systemd[1]: systemd-coredump@13-259225-0.service: Deactivated successfully.
Jan 26 10:05:26 compute-0 systemd[1]: systemd-coredump@13-259225-0.service: Consumed 1.266s CPU time.
Jan 26 10:05:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000021s ======
Jan 26 10:05:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:26.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 26 10:05:26 compute-0 podman[259231]: 2026-01-26 10:05:26.208979049 +0000 UTC m=+0.026238240 container died b03a590a3bc4e487fb372fd232da334e5f31dbcfccc00b452bf54b70d9b53e38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 10:05:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dbc2d13f1622129d0d5d85352b84c11bbab55ea2b05da4177347869fced2dd9-merged.mount: Deactivated successfully.
Jan 26 10:05:26 compute-0 podman[259231]: 2026-01-26 10:05:26.244061044 +0000 UTC m=+0.061320225 container remove b03a590a3bc4e487fb372fd232da334e5f31dbcfccc00b452bf54b70d9b53e38 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 26 10:05:26 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 10:05:26 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 10:05:26 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.593s CPU time.
Jan 26 10:05:26 compute-0 ceph-mon[74456]: pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:26 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1779943853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:05:26 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4109204512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:05:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:05:26] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 26 10:05:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:05:26] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Jan 26 10:05:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:05:27.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:05:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:27 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1283743844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:05:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:27.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:05:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:28.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:28 compute-0 ceph-mon[74456]: pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:05:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:29.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.925 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.925 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.944 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.944 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.944 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.971 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.971 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.971 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.972 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.972 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.972 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.972 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.972 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:05:29 compute-0 nova_compute[254880]: 2026-01-26 10:05:29.972 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:05:30 compute-0 nova_compute[254880]: 2026-01-26 10:05:30.008 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:05:30 compute-0 nova_compute[254880]: 2026-01-26 10:05:30.009 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:05:30 compute-0 nova_compute[254880]: 2026-01-26 10:05:30.010 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:05:30 compute-0 nova_compute[254880]: 2026-01-26 10:05:30.010 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:05:30 compute-0 nova_compute[254880]: 2026-01-26 10:05:30.010 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:05:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:30.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:05:30 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2084782635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:05:30 compute-0 nova_compute[254880]: 2026-01-26 10:05:30.448 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:05:30 compute-0 ceph-mon[74456]: pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:05:30 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2084782635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:05:30 compute-0 nova_compute[254880]: 2026-01-26 10:05:30.646 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:05:30 compute-0 nova_compute[254880]: 2026-01-26 10:05:30.648 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4906MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:05:30 compute-0 nova_compute[254880]: 2026-01-26 10:05:30.649 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:05:30 compute-0 nova_compute[254880]: 2026-01-26 10:05:30.649 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:05:30 compute-0 nova_compute[254880]: 2026-01-26 10:05:30.738 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:05:30 compute-0 nova_compute[254880]: 2026-01-26 10:05:30.738 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:05:30 compute-0 nova_compute[254880]: 2026-01-26 10:05:30.756 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:05:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100530 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:05:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:05:31 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3566203685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:05:31 compute-0 nova_compute[254880]: 2026-01-26 10:05:31.227 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:05:31 compute-0 nova_compute[254880]: 2026-01-26 10:05:31.236 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:05:31 compute-0 nova_compute[254880]: 2026-01-26 10:05:31.263 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:05:31 compute-0 nova_compute[254880]: 2026-01-26 10:05:31.265 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:05:31 compute-0 nova_compute[254880]: 2026-01-26 10:05:31.266 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:05:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:05:31 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3566203685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:05:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:31.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:05:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:32.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:05:32 compute-0 ceph-mon[74456]: pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:05:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:05:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 10:05:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:05:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:05:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:05:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:33.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:05:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000021s ======
Jan 26 10:05:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:34.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 26 10:05:34 compute-0 ceph-mon[74456]: pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 26 10:05:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:05:35 compute-0 podman[259328]: 2026-01-26 10:05:35.142049502 +0000 UTC m=+0.076744285 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 10:05:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:05:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:35.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:35 compute-0 ceph-mon[74456]: pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:05:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:05:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:36.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:05:36 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 14.
Jan 26 10:05:36 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 10:05:36 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 1.593s CPU time.
Jan 26 10:05:36 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 10:05:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:05:36] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Jan 26 10:05:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:05:36] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Jan 26 10:05:36 compute-0 podman[259400]: 2026-01-26 10:05:36.766684915 +0000 UTC m=+0.029733357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:05:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=404 latency=0.003000064s ======
Jan 26 10:05:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:36.973 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.003000064s
Jan 26 10:05:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - - [26/Jan/2026:10:05:36.992 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.000000000s
Jan 26 10:05:37 compute-0 podman[259400]: 2026-01-26 10:05:37.01103612 +0000 UTC m=+0.274084542 container create a0a85c01ab015d054cdde2983b0776ad331e5ff996efcf13e612a1a97d7b7fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 10:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b91c4f93b2615a94999620bcba1571d10e7bca37e9b3445451c64042770ccc8/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b91c4f93b2615a94999620bcba1571d10e7bca37e9b3445451c64042770ccc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b91c4f93b2615a94999620bcba1571d10e7bca37e9b3445451c64042770ccc8/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b91c4f93b2615a94999620bcba1571d10e7bca37e9b3445451c64042770ccc8/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:05:37 compute-0 podman[259400]: 2026-01-26 10:05:37.062881064 +0000 UTC m=+0.325929506 container init a0a85c01ab015d054cdde2983b0776ad331e5ff996efcf13e612a1a97d7b7fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 26 10:05:37 compute-0 podman[259400]: 2026-01-26 10:05:37.067961006 +0000 UTC m=+0.331009418 container start a0a85c01ab015d054cdde2983b0776ad331e5ff996efcf13e612a1a97d7b7fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 26 10:05:37 compute-0 bash[259400]: a0a85c01ab015d054cdde2983b0776ad331e5ff996efcf13e612a1a97d7b7fcd
Jan 26 10:05:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 10:05:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 10:05:37 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 10:05:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:05:37.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:05:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 10:05:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 10:05:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 10:05:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 10:05:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 10:05:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
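The ganesha.nfsd-2 startup above announces a monitoring endpoint on 0.0.0.0:9587 and then enters a 90-second grace window. A minimal sketch of polling that exporter from the host, assuming it serves plain-text metrics at /metrics (the log line only shows the bind address and port, so the path is an assumption):

import urllib.request

# Probe the exporter announced by "monitoring_init ... Init monitoring
# at 0.0.0.0:9587" above. The /metrics path is an assumption.
with urllib.request.urlopen("http://127.0.0.1:9587/metrics", timeout=5) as r:
    for line in r.read().decode("utf-8", "replace").splitlines():
        if line and not line.startswith("#"):   # skip HELP/TYPE comments
            print(line)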
Jan 26 10:05:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:05:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000016s ======
Jan 26 10:05:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:37.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
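The anonymous "HEAD / HTTP/1.0" requests that beast logs every couple of seconds from 192.168.122.100 and 192.168.122.102 are load-balancer health probes. A sketch reproducing one against the local radosgw; the port is an assumption, since these access lines do not record it (8080 is a common beast frontend default):

import http.client

# Reproduce the anonymous "HEAD / HTTP/1.0" probe from the beast access
# lines. Port 8080 is an assumption; the log does not show the frontend
# port radosgw is bound to.
conn = http.client.HTTPConnection("127.0.0.1", 8080, timeout=5)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)   # a healthy gateway answers 200
conn.close()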
Jan 26 10:05:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:05:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:38.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:38 compute-0 ceph-mon[74456]: pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:05:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:05:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:39.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:40.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:40 compute-0 ceph-mon[74456]: pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 26 10:05:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 10:05:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 26 10:05:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:41.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:42 compute-0 sudo[259461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:05:42 compute-0 sudo[259461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:05:42 compute-0 sudo[259461]: pam_unix(sudo:session): session closed for user root
Jan 26 10:05:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:42.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 26 10:05:42 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 26 10:05:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:05:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to list kv ret=-2
Jan 26 10:05:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 26 10:05:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:05:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:05:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
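The ret=-2 in rados_kv_traverse is ENOENT: on a fresh start the rados_cluster recovery backend finds no recovery object to traverse, so the read fails harmlessly and grace proceeds with a client count of 0. A sketch inspecting such a recovery object with the python-rados bindings; pool, namespace, and object name are assumptions modeled on cephadm NFS conventions and the rec-0000000000000023:nfs.cephfs.2 object named further down in this log:

import rados

# Pool ".nfs", namespace "cephfs", and the object name are assumptions.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
cluster.connect()
try:
    ioctx = cluster.open_ioctx(".nfs")
    ioctx.set_namespace("cephfs")
    read_op = ioctx.create_read_op()
    it, _ = ioctx.get_omap_vals(read_op, "", "", 100)
    # Raises rados.ObjectNotFound (the -2/ENOENT above) until Ganesha
    # has created the recovery object.
    ioctx.operate_read_op(read_op, "rec-0000000000000023:nfs.cephfs.2")
    for client_id, data in it:   # one omap key per NFSv4 client id
        print(client_id, data)
    ioctx.release_read_op(read_op)
    ioctx.close()
finally:
    cluster.shutdown()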
Jan 26 10:05:43 compute-0 ceph-mon[74456]: pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Jan 26 10:05:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 921 B/s wr, 2 op/s
Jan 26 10:05:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.002000004s ======
Jan 26 10:05:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:43.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000004s
Jan 26 10:05:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:44.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 26 10:05:44 compute-0 ceph-mon[74456]: osdmap e144: 3 total, 3 up, 3 in
Jan 26 10:05:44 compute-0 ceph-mon[74456]: pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 921 B/s wr, 2 op/s
Jan 26 10:05:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 26 10:05:44 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 26 10:05:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:44 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:05:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:44 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:05:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:44 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:05:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 26 10:05:45 compute-0 ceph-mon[74456]: osdmap e145: 3 total, 3 up, 3 in
Jan 26 10:05:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 26 10:05:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v701: 353 pgs: 353 active+clean; 16 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 MiB/s wr, 28 op/s
Jan 26 10:05:45 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 26 10:05:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:45.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:46.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:46 compute-0 ceph-mon[74456]: pgmap v701: 353 pgs: 353 active+clean; 16 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 MiB/s wr, 28 op/s
Jan 26 10:05:46 compute-0 ceph-mon[74456]: osdmap e146: 3 total, 3 up, 3 in
Jan 26 10:05:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:05:46] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Jan 26 10:05:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:05:46] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Jan 26 10:05:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:05:47.106Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:05:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:05:47.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
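Both dashboard receivers are unreachable here: compute-1 times out at the HTTP layer and compute-2 fails at TCP connect. A sketch reproducing the failing dispatcher POST against the same 8443 endpoint with a short timeout; the body is a trimmed-down stand-in for Alertmanager's v4 webhook payload, not what was actually sent:

import json
import urllib.request

# URL taken verbatim from the error above; the payload is a minimal
# stand-in for Alertmanager's v4 webhook body.
url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
payload = {"version": "4", "status": "firing",
           "alerts": [{"status": "firing",
                       "labels": {"alertname": "TestAlert"},
                       "annotations": {}}]}
req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                             headers={"Content-Type": "application/json"})
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("HTTP", resp.status)
except OSError as exc:   # both the i/o timeout and the deadline land here
    print("probe failed:", exc)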
Jan 26 10:05:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v702: 353 pgs: 353 active+clean; 16 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.7 MiB/s wr, 24 op/s
Jan 26 10:05:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 26 10:05:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 26 10:05:47 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 26 10:05:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100547 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 10:05:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000003s ======
Jan 26 10:05:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:47.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000003s
Jan 26 10:05:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:05:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 26 10:05:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 26 10:05:48 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 26 10:05:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000002s ======
Jan 26 10:05:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:48.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000002s
Jan 26 10:05:48 compute-0 ceph-mon[74456]: pgmap v702: 353 pgs: 353 active+clean; 16 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.7 MiB/s wr, 24 op/s
Jan 26 10:05:48 compute-0 ceph-mon[74456]: osdmap e147: 3 total, 3 up, 3 in
Jan 26 10:05:48 compute-0 ceph-mon[74456]: osdmap e148: 3 total, 3 up, 3 in
Jan 26 10:05:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:05:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
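This audit entry recurs roughly every 15 seconds: the mgr polls the OSD blocklist on behalf of its modules. The same query issued from the command line via the ceph CLI:

import json
import subprocess

# Issue the same mon command the mgr dispatches above. Each entry in the
# JSON output typically carries the blocklisted address and its expiry.
out = subprocess.check_output(
    ["ceph", "osd", "blocklist", "ls", "--format", "json"])
for entry in json.loads(out):
    print(entry)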
Jan 26 10:05:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:05:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:05:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:05:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:05:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:05:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:05:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v705: 353 pgs: 353 active+clean; 16 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 MiB/s wr, 29 op/s
Jan 26 10:05:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:49.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:50.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000023:nfs.cephfs.2: -2
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 26 10:05:50 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:05:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:51 compute-0 podman[259510]: 2026-01-26 10:05:51.158393523 +0000 UTC m=+0.090750162 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 10:05:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v706: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 4.1 MiB/s wr, 46 op/s
Jan 26 10:05:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:51 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb340016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:51 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:51 compute-0 ceph-mon[74456]: pgmap v705: 353 pgs: 353 active+clean; 16 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 MiB/s wr, 29 op/s
Jan 26 10:05:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000002s ======
Jan 26 10:05:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:51.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000002s
Jan 26 10:05:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:52.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:52 compute-0 ceph-mon[74456]: pgmap v706: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 4.1 MiB/s wr, 46 op/s
Jan 26 10:05:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100552 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 10:05:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:52 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
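The recurring svc_vc_recv "proxy header rest len failed" events line up with haproxy's layer-4 checks: the checker opens a TCP connection and closes it without sending the PROXY protocol preamble this Ganesha instance is apparently configured to expect, so the RPC layer marks the transport dead. A hypothetical probe that does send a PROXY v1 line first; the port and addresses are placeholders, since the log does not show which port Ganesha binds behind haproxy:

import socket

# PROXY v1 preamble: "PROXY TCP4 <src> <dst> <sport> <dport>\r\n".
# 2049 is the standard NFS port and a placeholder here, as are the
# addresses.
s = socket.create_connection(("127.0.0.1", 2049), timeout=5)
s.sendall(b"PROXY TCP4 192.168.122.100 192.168.122.100 51000 2049\r\n")
s.close()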
Jan 26 10:05:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:05:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v707: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.1 MiB/s wr, 34 op/s
Jan 26 10:05:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:53 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:53 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:53.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:54.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:05:54.690 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:05:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:05:54.691 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:05:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:05:54.691 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
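The acquired/held/released triplet around _check_child_processes is oslo.concurrency's lockutils doing its DEBUG logging: the agent wraps its periodic child-process sweep in a named in-process lock. The pattern in miniature:

from oslo_concurrency import lockutils

# Same pattern as the log lines above: a named in-process lock whose
# acquire/release is logged at DEBUG by oslo_concurrency.lockutils.
@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    pass   # the agent inspects its spawned haproxy processes here

check_child_processes()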
Jan 26 10:05:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:54 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:54 compute-0 ceph-mon[74456]: pgmap v707: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.1 MiB/s wr, 34 op/s
Jan 26 10:05:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v708: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.1 MiB/s wr, 35 op/s
Jan 26 10:05:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:55 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24001140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:55 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb340016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:55.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:56.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:05:56] "GET /metrics HTTP/1.1" 200 48406 "" "Prometheus/2.51.0"
Jan 26 10:05:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:05:56] "GET /metrics HTTP/1.1" 200 48406 "" "Prometheus/2.51.0"
Jan 26 10:05:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:56 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:56 compute-0 ceph-mon[74456]: pgmap v708: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.1 MiB/s wr, 35 op/s
Jan 26 10:05:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:05:57.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:05:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v709: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.5 MiB/s wr, 28 op/s
Jan 26 10:05:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:57 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:57 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:57.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:05:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:05:58.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:05:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:58 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb340016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:58 compute-0 ceph-mon[74456]: pgmap v709: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.5 MiB/s wr, 28 op/s
Jan 26 10:05:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3630123591' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:05:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3630123591' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:05:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v710: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.2 MiB/s wr, 24 op/s
Jan 26 10:05:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:59 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:05:59 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:05:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:05:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:05:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:05:59.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:00.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24001c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:00 compute-0 ceph-mon[74456]: pgmap v710: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.2 MiB/s wr, 24 op/s
Jan 26 10:06:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v711: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.1 MiB/s wr, 23 op/s
Jan 26 10:06:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:01 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb340016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:01 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.002000005s ======
Jan 26 10:06:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:01.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000005s
Jan 26 10:06:02 compute-0 sudo[259548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:06:02 compute-0 sudo[259548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:06:02 compute-0 sudo[259548]: pam_unix(sudo:session): session closed for user root
Jan 26 10:06:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:02.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:02 compute-0 sshd-session[259573]: Invalid user postgres from 157.245.76.178 port 60198
Jan 26 10:06:02 compute-0 sshd-session[259573]: Connection closed by invalid user postgres 157.245.76.178 port 60198 [preauth]
Jan 26 10:06:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:02 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:02 compute-0 ceph-mon[74456]: pgmap v711: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.1 MiB/s wr, 23 op/s
Jan 26 10:06:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:06:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v712: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 26 10:06:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:03 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14000e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:03 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:06:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:06:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000002s ======
Jan 26 10:06:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:03.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000002s
Jan 26 10:06:03 compute-0 ceph-mon[74456]: pgmap v712: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 26 10:06:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:06:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:04.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:04 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34002bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:05 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:06:05.258 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:06:05 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:06:05.260 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 10:06:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v713: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 10:06:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:05 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:05 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:05.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:06 compute-0 podman[259579]: 2026-01-26 10:06:06.118047773 +0000 UTC m=+0.051323969 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 10:06:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:06.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:06:06] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 26 10:06:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:06:06] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 26 10:06:06 compute-0 ceph-mon[74456]: pgmap v713: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 26 10:06:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:06 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:06:07.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:06:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:06:07.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:06:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v714: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:06:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:07 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34002bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:07 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:07.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:06:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:08.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.343100) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421968343160, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 969, "num_deletes": 255, "total_data_size": 1585536, "memory_usage": 1619352, "flush_reason": "Manual Compaction"}
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421968357532, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 1564177, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22935, "largest_seqno": 23902, "table_properties": {"data_size": 1559303, "index_size": 2398, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10405, "raw_average_key_size": 19, "raw_value_size": 1549381, "raw_average_value_size": 2853, "num_data_blocks": 105, "num_entries": 543, "num_filter_entries": 543, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769421898, "oldest_key_time": 1769421898, "file_creation_time": 1769421968, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 14500 microseconds, and 7616 cpu microseconds.
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.357602) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 1564177 bytes OK
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.357627) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.359239) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.359266) EVENT_LOG_v1 {"time_micros": 1769421968359256, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.359292) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1580958, prev total WAL file size 1580958, number of live WAL files 2.
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.360944) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(1527KB)], [50(11MB)]
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421968360998, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13665585, "oldest_snapshot_seqno": -1}
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5391 keys, 13453953 bytes, temperature: kUnknown
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421968450453, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13453953, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13417630, "index_size": 21749, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 137552, "raw_average_key_size": 25, "raw_value_size": 13319865, "raw_average_value_size": 2470, "num_data_blocks": 888, "num_entries": 5391, "num_filter_entries": 5391, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769421968, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.450713) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13453953 bytes
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.452518) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.6 rd, 150.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.5 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(17.3) write-amplify(8.6) OK, records in: 5923, records dropped: 532 output_compression: NoCompression
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.452541) EVENT_LOG_v1 {"time_micros": 1769421968452530, "job": 26, "event": "compaction_finished", "compaction_time_micros": 89540, "compaction_time_cpu_micros": 49545, "output_level": 6, "num_output_files": 1, "total_output_size": 13453953, "num_input_records": 5923, "num_output_records": 5391, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421968453161, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769421968456064, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.360724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.456309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.456320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.456324) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.456327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:06:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:06:08.456330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:06:08 compute-0 ceph-mon[74456]: pgmap v714: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:06:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:08 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v715: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:06:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:09 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:09 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34002bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:06:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:09.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:06:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:10.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:10 compute-0 ceph-mon[74456]: pgmap v715: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:06:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v716: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:06:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:11 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:11 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:06:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:11.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:06:12 compute-0 ceph-mon[74456]: pgmap v716: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 26 10:06:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:12.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:12 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:06:12.262 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:06:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:12 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34002bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:06:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v717: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:06:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:13 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:13 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:13.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:14.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100614 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:06:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:14 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:14 compute-0 ceph-mon[74456]: pgmap v717: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:06:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v718: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:06:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:15 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:15 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:06:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:15.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:06:16 compute-0 ceph-mon[74456]: pgmap v718: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 26 10:06:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:16.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:06:16] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 26 10:06:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:06:16] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 26 10:06:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:16 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:06:17.109Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:06:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v719: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:06:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:17 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:17 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:17.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:06:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:06:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:18.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:06:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:06:18
Jan 26 10:06:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:06:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:06:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', '.rgw.root', 'default.rgw.meta', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', '.nfs', 'volumes']
Jan 26 10:06:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:06:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:06:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:06:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:06:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:06:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:06:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:06:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:18 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:06:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:06:19 compute-0 ceph-mon[74456]: pgmap v719: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:06:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:06:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v720: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:06:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:19 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:19 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:19.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:20 compute-0 ceph-mon[74456]: pgmap v720: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:06:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:20.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c002e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:21 compute-0 sudo[259614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:06:21 compute-0 sudo[259614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:06:21 compute-0 sudo[259614]: pam_unix(sudo:session): session closed for user root
Jan 26 10:06:21 compute-0 sudo[259639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:06:21 compute-0 sudo[259639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:06:21 compute-0 podman[259663]: 2026-01-26 10:06:21.362803769 +0000 UTC m=+0.145423328 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 10:06:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v721: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:06:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:21 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:21 compute-0 sudo[259639]: pam_unix(sudo:session): session closed for user root
Jan 26 10:06:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:21 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:06:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:21.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:06:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:06:21 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:06:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:06:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:06:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:06:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:06:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:06:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:06:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:06:22 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:06:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:06:22 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:06:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:06:22 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:06:22 compute-0 sudo[259720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:06:22 compute-0 sudo[259720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:06:22 compute-0 sudo[259720]: pam_unix(sudo:session): session closed for user root
Jan 26 10:06:22 compute-0 sudo[259721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:06:22 compute-0 sudo[259721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:06:22 compute-0 sudo[259721]: pam_unix(sudo:session): session closed for user root
Jan 26 10:06:22 compute-0 sudo[259770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:06:22 compute-0 sudo[259770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:06:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:22.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:22 compute-0 podman[259834]: 2026-01-26 10:06:22.60263977 +0000 UTC m=+0.045807312 container create 1eabef20e7e98e016506f81a46f874f263ec3e4b073e264d120721966902fe13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 26 10:06:22 compute-0 ceph-mon[74456]: pgmap v721: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 26 10:06:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:06:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:06:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:06:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:06:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:06:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:06:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:06:22 compute-0 systemd[1]: Started libpod-conmon-1eabef20e7e98e016506f81a46f874f263ec3e4b073e264d120721966902fe13.scope.
Jan 26 10:06:22 compute-0 podman[259834]: 2026-01-26 10:06:22.579843847 +0000 UTC m=+0.023011379 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:06:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:06:22 compute-0 podman[259834]: 2026-01-26 10:06:22.704308627 +0000 UTC m=+0.147476169 container init 1eabef20e7e98e016506f81a46f874f263ec3e4b073e264d120721966902fe13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lederberg, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:06:22 compute-0 podman[259834]: 2026-01-26 10:06:22.717239802 +0000 UTC m=+0.160407314 container start 1eabef20e7e98e016506f81a46f874f263ec3e4b073e264d120721966902fe13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lederberg, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:06:22 compute-0 podman[259834]: 2026-01-26 10:06:22.721010337 +0000 UTC m=+0.164177879 container attach 1eabef20e7e98e016506f81a46f874f263ec3e4b073e264d120721966902fe13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lederberg, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:06:22 compute-0 unruffled_lederberg[259851]: 167 167
Jan 26 10:06:22 compute-0 systemd[1]: libpod-1eabef20e7e98e016506f81a46f874f263ec3e4b073e264d120721966902fe13.scope: Deactivated successfully.
Jan 26 10:06:22 compute-0 podman[259834]: 2026-01-26 10:06:22.72866934 +0000 UTC m=+0.171836892 container died 1eabef20e7e98e016506f81a46f874f263ec3e4b073e264d120721966902fe13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lederberg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 26 10:06:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4cb6450da86150ebcdb918946cfbaba139414d7b4bfeef30fa9e224e98b0049-merged.mount: Deactivated successfully.
Jan 26 10:06:22 compute-0 podman[259834]: 2026-01-26 10:06:22.785272524 +0000 UTC m=+0.228440036 container remove 1eabef20e7e98e016506f81a46f874f263ec3e4b073e264d120721966902fe13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 26 10:06:22 compute-0 systemd[1]: libpod-conmon-1eabef20e7e98e016506f81a46f874f263ec3e4b073e264d120721966902fe13.scope: Deactivated successfully.
Jan 26 10:06:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:22 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:22 compute-0 podman[259876]: 2026-01-26 10:06:22.9747944 +0000 UTC m=+0.048919232 container create 69b6d7d3a531064aa02f3f8b6ad5aa6dcca6f186be671637fb5fbc41f0346dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:06:23 compute-0 systemd[1]: Started libpod-conmon-69b6d7d3a531064aa02f3f8b6ad5aa6dcca6f186be671637fb5fbc41f0346dd1.scope.
Jan 26 10:06:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:06:23 compute-0 podman[259876]: 2026-01-26 10:06:22.952767007 +0000 UTC m=+0.026891859 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd7a82ae836ab0312db6367c02a4c45c20319ff175a640c9902215db3aebf1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd7a82ae836ab0312db6367c02a4c45c20319ff175a640c9902215db3aebf1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd7a82ae836ab0312db6367c02a4c45c20319ff175a640c9902215db3aebf1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd7a82ae836ab0312db6367c02a4c45c20319ff175a640c9902215db3aebf1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd7a82ae836ab0312db6367c02a4c45c20319ff175a640c9902215db3aebf1f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:06:23 compute-0 podman[259876]: 2026-01-26 10:06:23.063294415 +0000 UTC m=+0.137419267 container init 69b6d7d3a531064aa02f3f8b6ad5aa6dcca6f186be671637fb5fbc41f0346dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:06:23 compute-0 podman[259876]: 2026-01-26 10:06:23.070538418 +0000 UTC m=+0.144663250 container start 69b6d7d3a531064aa02f3f8b6ad5aa6dcca6f186be671637fb5fbc41f0346dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_burnell, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:06:23 compute-0 podman[259876]: 2026-01-26 10:06:23.073844791 +0000 UTC m=+0.147969663 container attach 69b6d7d3a531064aa02f3f8b6ad5aa6dcca6f186be671637fb5fbc41f0346dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:06:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:06:23 compute-0 unruffled_burnell[259892]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:06:23 compute-0 unruffled_burnell[259892]: --> All data devices are unavailable
Jan 26 10:06:23 compute-0 systemd[1]: libpod-69b6d7d3a531064aa02f3f8b6ad5aa6dcca6f186be671637fb5fbc41f0346dd1.scope: Deactivated successfully.
Jan 26 10:06:23 compute-0 podman[259907]: 2026-01-26 10:06:23.437449576 +0000 UTC m=+0.022968249 container died 69b6d7d3a531064aa02f3f8b6ad5aa6dcca6f186be671637fb5fbc41f0346dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 10:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cd7a82ae836ab0312db6367c02a4c45c20319ff175a640c9902215db3aebf1f-merged.mount: Deactivated successfully.
Jan 26 10:06:23 compute-0 podman[259907]: 2026-01-26 10:06:23.474707902 +0000 UTC m=+0.060226545 container remove 69b6d7d3a531064aa02f3f8b6ad5aa6dcca6f186be671637fb5fbc41f0346dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 10:06:23 compute-0 systemd[1]: libpod-conmon-69b6d7d3a531064aa02f3f8b6ad5aa6dcca6f186be671637fb5fbc41f0346dd1.scope: Deactivated successfully.
Jan 26 10:06:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v722: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:06:23 compute-0 sudo[259770]: pam_unix(sudo:session): session closed for user root
Jan 26 10:06:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:23 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:06:23 compute-0 sudo[259923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:06:23 compute-0 sudo[259923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:06:23 compute-0 sudo[259923]: pam_unix(sudo:session): session closed for user root
Jan 26 10:06:23 compute-0 sudo[259948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:06:23 compute-0 sudo[259948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:06:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:23 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:23 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:06:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:23.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:06:24 compute-0 podman[260013]: 2026-01-26 10:06:24.091600047 +0000 UTC m=+0.044647834 container create 0f95f9f8f86faab1f0a7084ce46c226629c6b6cef5e2e5f851b97a6399216a2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_varahamihira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:06:24 compute-0 systemd[1]: Started libpod-conmon-0f95f9f8f86faab1f0a7084ce46c226629c6b6cef5e2e5f851b97a6399216a2e.scope.
Jan 26 10:06:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:06:24 compute-0 podman[260013]: 2026-01-26 10:06:24.070987209 +0000 UTC m=+0.024035026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:06:24 compute-0 podman[260013]: 2026-01-26 10:06:24.171108257 +0000 UTC m=+0.124156074 container init 0f95f9f8f86faab1f0a7084ce46c226629c6b6cef5e2e5f851b97a6399216a2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 10:06:24 compute-0 podman[260013]: 2026-01-26 10:06:24.177765534 +0000 UTC m=+0.130813311 container start 0f95f9f8f86faab1f0a7084ce46c226629c6b6cef5e2e5f851b97a6399216a2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 26 10:06:24 compute-0 podman[260013]: 2026-01-26 10:06:24.181443827 +0000 UTC m=+0.134491644 container attach 0f95f9f8f86faab1f0a7084ce46c226629c6b6cef5e2e5f851b97a6399216a2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:06:24 compute-0 stoic_varahamihira[260030]: 167 167
Jan 26 10:06:24 compute-0 systemd[1]: libpod-0f95f9f8f86faab1f0a7084ce46c226629c6b6cef5e2e5f851b97a6399216a2e.scope: Deactivated successfully.
Jan 26 10:06:24 compute-0 podman[260013]: 2026-01-26 10:06:24.186693939 +0000 UTC m=+0.139741736 container died 0f95f9f8f86faab1f0a7084ce46c226629c6b6cef5e2e5f851b97a6399216a2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_varahamihira, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:06:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-827283beef841625b7e583796b656836e7f54dd6209691896d92e74ffa2ecc79-merged.mount: Deactivated successfully.
Jan 26 10:06:24 compute-0 podman[260013]: 2026-01-26 10:06:24.229062184 +0000 UTC m=+0.182109981 container remove 0f95f9f8f86faab1f0a7084ce46c226629c6b6cef5e2e5f851b97a6399216a2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:06:24 compute-0 systemd[1]: libpod-conmon-0f95f9f8f86faab1f0a7084ce46c226629c6b6cef5e2e5f851b97a6399216a2e.scope: Deactivated successfully.
Jan 26 10:06:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:24.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:24 compute-0 podman[260054]: 2026-01-26 10:06:24.41220382 +0000 UTC m=+0.042422207 container create a7e8c450775bbe9130d4ca0fdd918cc3c23e771a2624d02899e48674853e7d87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 26 10:06:24 compute-0 systemd[1]: Started libpod-conmon-a7e8c450775bbe9130d4ca0fdd918cc3c23e771a2624d02899e48674853e7d87.scope.
Jan 26 10:06:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:06:24 compute-0 podman[260054]: 2026-01-26 10:06:24.395365047 +0000 UTC m=+0.025583454 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a011cbb0e16c7df9822acb82d56bcb361753a5a3e6733d5c20a9a679fbdc688a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a011cbb0e16c7df9822acb82d56bcb361753a5a3e6733d5c20a9a679fbdc688a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a011cbb0e16c7df9822acb82d56bcb361753a5a3e6733d5c20a9a679fbdc688a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:06:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a011cbb0e16c7df9822acb82d56bcb361753a5a3e6733d5c20a9a679fbdc688a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:06:24 compute-0 podman[260054]: 2026-01-26 10:06:24.50323779 +0000 UTC m=+0.133456177 container init a7e8c450775bbe9130d4ca0fdd918cc3c23e771a2624d02899e48674853e7d87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shirley, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:06:24 compute-0 podman[260054]: 2026-01-26 10:06:24.512519644 +0000 UTC m=+0.142738031 container start a7e8c450775bbe9130d4ca0fdd918cc3c23e771a2624d02899e48674853e7d87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shirley, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 10:06:24 compute-0 podman[260054]: 2026-01-26 10:06:24.515871078 +0000 UTC m=+0.146089485 container attach a7e8c450775bbe9130d4ca0fdd918cc3c23e771a2624d02899e48674853e7d87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:06:24 compute-0 ceph-mon[74456]: pgmap v722: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]: {
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:     "0": [
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:         {
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:             "devices": [
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "/dev/loop3"
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:             ],
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:             "lv_name": "ceph_lv0",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:             "lv_size": "21470642176",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:             "name": "ceph_lv0",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:             "tags": {
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "ceph.cluster_name": "ceph",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "ceph.crush_device_class": "",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "ceph.encrypted": "0",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "ceph.osd_id": "0",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "ceph.type": "block",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "ceph.vdo": "0",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:                 "ceph.with_tpm": "0"
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:             },
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:             "type": "block",
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:             "vg_name": "ceph_vg0"
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:         }
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]:     ]
Jan 26 10:06:24 compute-0 sleepy_shirley[260072]: }
Jan 26 10:06:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:24 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:24 compute-0 systemd[1]: libpod-a7e8c450775bbe9130d4ca0fdd918cc3c23e771a2624d02899e48674853e7d87.scope: Deactivated successfully.
Jan 26 10:06:24 compute-0 podman[260081]: 2026-01-26 10:06:24.873499211 +0000 UTC m=+0.025348598 container died a7e8c450775bbe9130d4ca0fdd918cc3c23e771a2624d02899e48674853e7d87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shirley, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 10:06:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a011cbb0e16c7df9822acb82d56bcb361753a5a3e6733d5c20a9a679fbdc688a-merged.mount: Deactivated successfully.
Jan 26 10:06:24 compute-0 podman[260081]: 2026-01-26 10:06:24.911442676 +0000 UTC m=+0.063292043 container remove a7e8c450775bbe9130d4ca0fdd918cc3c23e771a2624d02899e48674853e7d87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_shirley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:06:24 compute-0 systemd[1]: libpod-conmon-a7e8c450775bbe9130d4ca0fdd918cc3c23e771a2624d02899e48674853e7d87.scope: Deactivated successfully.
Jan 26 10:06:24 compute-0 sudo[259948]: pam_unix(sudo:session): session closed for user root
Jan 26 10:06:24 compute-0 nova_compute[254880]: 2026-01-26 10:06:24.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:06:24 compute-0 nova_compute[254880]: 2026-01-26 10:06:24.960 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 10:06:24 compute-0 nova_compute[254880]: 2026-01-26 10:06:24.976 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 10:06:24 compute-0 nova_compute[254880]: 2026-01-26 10:06:24.977 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:06:24 compute-0 nova_compute[254880]: 2026-01-26 10:06:24.978 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 10:06:25 compute-0 nova_compute[254880]: 2026-01-26 10:06:25.011 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:06:25 compute-0 sudo[260097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:06:25 compute-0 sudo[260097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:06:25 compute-0 sudo[260097]: pam_unix(sudo:session): session closed for user root
Jan 26 10:06:25 compute-0 sudo[260122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:06:25 compute-0 sudo[260122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:06:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v723: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 10:06:25 compute-0 podman[260188]: 2026-01-26 10:06:25.548294482 +0000 UTC m=+0.051152867 container create 29a747899199671e46c7af70e8cc602e13eb32c438f337488353a3dd46bf6bea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_hermann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 10:06:25 compute-0 systemd[1]: Started libpod-conmon-29a747899199671e46c7af70e8cc602e13eb32c438f337488353a3dd46bf6bea.scope.
Jan 26 10:06:25 compute-0 podman[260188]: 2026-01-26 10:06:25.526055703 +0000 UTC m=+0.028914098 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:06:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:06:25 compute-0 podman[260188]: 2026-01-26 10:06:25.646310717 +0000 UTC m=+0.149169112 container init 29a747899199671e46c7af70e8cc602e13eb32c438f337488353a3dd46bf6bea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_hermann, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 10:06:25 compute-0 podman[260188]: 2026-01-26 10:06:25.653379935 +0000 UTC m=+0.156238320 container start 29a747899199671e46c7af70e8cc602e13eb32c438f337488353a3dd46bf6bea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:06:25 compute-0 goofy_hermann[260204]: 167 167
Jan 26 10:06:25 compute-0 podman[260188]: 2026-01-26 10:06:25.658500534 +0000 UTC m=+0.161358949 container attach 29a747899199671e46c7af70e8cc602e13eb32c438f337488353a3dd46bf6bea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:06:25 compute-0 systemd[1]: libpod-29a747899199671e46c7af70e8cc602e13eb32c438f337488353a3dd46bf6bea.scope: Deactivated successfully.
Jan 26 10:06:25 compute-0 podman[260188]: 2026-01-26 10:06:25.659562301 +0000 UTC m=+0.162420706 container died 29a747899199671e46c7af70e8cc602e13eb32c438f337488353a3dd46bf6bea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_hermann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 26 10:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-85469c9397387b5d20c98660607138728bdeb8022ac56d590f800c0612cd2e5f-merged.mount: Deactivated successfully.
Jan 26 10:06:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:25 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:25 compute-0 podman[260188]: 2026-01-26 10:06:25.697570897 +0000 UTC m=+0.200429282 container remove 29a747899199671e46c7af70e8cc602e13eb32c438f337488353a3dd46bf6bea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_hermann, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 26 10:06:25 compute-0 systemd[1]: libpod-conmon-29a747899199671e46c7af70e8cc602e13eb32c438f337488353a3dd46bf6bea.scope: Deactivated successfully.
Jan 26 10:06:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:25 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:25.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:25 compute-0 podman[260229]: 2026-01-26 10:06:25.885133364 +0000 UTC m=+0.048520572 container create f2e2d6c9fc8dbaba2eeffad255d310f25ed282357f2f5ad4a62139617b6125bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_tesla, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:06:25 compute-0 systemd[1]: Started libpod-conmon-f2e2d6c9fc8dbaba2eeffad255d310f25ed282357f2f5ad4a62139617b6125bc.scope.
Jan 26 10:06:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dff91624f3290d69be10182fe16259c7657692cdbab55106df03ab7484c1f31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dff91624f3290d69be10182fe16259c7657692cdbab55106df03ab7484c1f31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dff91624f3290d69be10182fe16259c7657692cdbab55106df03ab7484c1f31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dff91624f3290d69be10182fe16259c7657692cdbab55106df03ab7484c1f31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:06:25 compute-0 podman[260229]: 2026-01-26 10:06:25.865669814 +0000 UTC m=+0.029057042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:06:25 compute-0 podman[260229]: 2026-01-26 10:06:25.964454839 +0000 UTC m=+0.127842077 container init f2e2d6c9fc8dbaba2eeffad255d310f25ed282357f2f5ad4a62139617b6125bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:06:25 compute-0 podman[260229]: 2026-01-26 10:06:25.974461691 +0000 UTC m=+0.137848899 container start f2e2d6c9fc8dbaba2eeffad255d310f25ed282357f2f5ad4a62139617b6125bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_tesla, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 26 10:06:25 compute-0 podman[260229]: 2026-01-26 10:06:25.977976359 +0000 UTC m=+0.141363627 container attach f2e2d6c9fc8dbaba2eeffad255d310f25ed282357f2f5ad4a62139617b6125bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_tesla, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:06:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:06:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:26.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:06:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:26 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:06:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:26 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:06:26 compute-0 lvm[260322]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:06:26 compute-0 lvm[260322]: VG ceph_vg0 finished
Jan 26 10:06:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:06:26] "GET /metrics HTTP/1.1" 200 48406 "" "Prometheus/2.51.0"
Jan 26 10:06:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:06:26] "GET /metrics HTTP/1.1" 200 48406 "" "Prometheus/2.51.0"
Jan 26 10:06:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:26 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:06:26 compute-0 objective_tesla[260245]: {}
Jan 26 10:06:26 compute-0 systemd[1]: libpod-f2e2d6c9fc8dbaba2eeffad255d310f25ed282357f2f5ad4a62139617b6125bc.scope: Deactivated successfully.
Jan 26 10:06:26 compute-0 systemd[1]: libpod-f2e2d6c9fc8dbaba2eeffad255d310f25ed282357f2f5ad4a62139617b6125bc.scope: Consumed 1.283s CPU time.
Jan 26 10:06:26 compute-0 conmon[260245]: conmon f2e2d6c9fc8dbaba2eef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2e2d6c9fc8dbaba2eeffad255d310f25ed282357f2f5ad4a62139617b6125bc.scope/container/memory.events
Jan 26 10:06:26 compute-0 podman[260229]: 2026-01-26 10:06:26.704380798 +0000 UTC m=+0.867768026 container died f2e2d6c9fc8dbaba2eeffad255d310f25ed282357f2f5ad4a62139617b6125bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 10:06:26 compute-0 ceph-mon[74456]: pgmap v723: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 10:06:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dff91624f3290d69be10182fe16259c7657692cdbab55106df03ab7484c1f31-merged.mount: Deactivated successfully.
Jan 26 10:06:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:26 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:26 compute-0 podman[260229]: 2026-01-26 10:06:26.849961879 +0000 UTC m=+1.013349097 container remove f2e2d6c9fc8dbaba2eeffad255d310f25ed282357f2f5ad4a62139617b6125bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:06:26 compute-0 systemd[1]: libpod-conmon-f2e2d6c9fc8dbaba2eeffad255d310f25ed282357f2f5ad4a62139617b6125bc.scope: Deactivated successfully.
Jan 26 10:06:26 compute-0 sudo[260122]: pam_unix(sudo:session): session closed for user root
Jan 26 10:06:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:06:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:06:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:06:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:06:27 compute-0 nova_compute[254880]: 2026-01-26 10:06:27.021 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:06:27 compute-0 sudo[260339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:06:27 compute-0 sudo[260339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:06:27 compute-0 sudo[260339]: pam_unix(sudo:session): session closed for user root
Jan 26 10:06:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:06:27.109Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:06:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v724: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 10:06:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:27 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:27 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb100032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:27.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:27 compute-0 nova_compute[254880]: 2026-01-26 10:06:27.954 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:06:28 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:06:28 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:06:28 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1480749292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:06:28 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3328335903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:06:28 compute-0 ceph-mon[74456]: pgmap v724: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 10:06:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:06:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:06:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:28.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:06:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:28 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:29 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4136502083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:06:29 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/211135410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:06:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v725: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 10:06:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:29 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:29 compute-0 sshd-session[260096]: Connection reset by 205.210.31.250 port 58714 [preauth]
Jan 26 10:06:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:29 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:29.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:29 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 10:06:29 compute-0 nova_compute[254880]: 2026-01-26 10:06:29.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:06:29 compute-0 nova_compute[254880]: 2026-01-26 10:06:29.958 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:06:29 compute-0 nova_compute[254880]: 2026-01-26 10:06:29.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.028 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.028 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.028 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.028 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.053 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.054 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.054 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.054 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.054 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:06:30 compute-0 ceph-mon[74456]: pgmap v725: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 26 10:06:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:30.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:06:30 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3629389423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.507 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.651 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.652 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4871MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.653 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.653 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.781 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.782 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:06:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb100032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.863 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing inventories for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.921 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating ProviderTree inventory for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.921 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating inventory in ProviderTree for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.940 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing aggregate associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.965 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing trait associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 10:06:30 compute-0 nova_compute[254880]: 2026-01-26 10:06:30.992 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:06:31 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3629389423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:06:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:06:31 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3327039981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:06:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v726: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 10:06:31 compute-0 nova_compute[254880]: 2026-01-26 10:06:31.506 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:06:31 compute-0 nova_compute[254880]: 2026-01-26 10:06:31.511 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:06:31 compute-0 nova_compute[254880]: 2026-01-26 10:06:31.527 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:06:31 compute-0 nova_compute[254880]: 2026-01-26 10:06:31.528 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:06:31 compute-0 nova_compute[254880]: 2026-01-26 10:06:31.529 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:06:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:31 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:31 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:06:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:31.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:06:32 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3327039981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:06:32 compute-0 ceph-mon[74456]: pgmap v726: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 10:06:32 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3240052794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:06:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:06:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:32.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:06:32 compute-0 nova_compute[254880]: 2026-01-26 10:06:32.459 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:06:32 compute-0 nova_compute[254880]: 2026-01-26 10:06:32.460 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:06:32 compute-0 nova_compute[254880]: 2026-01-26 10:06:32.460 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:06:32 compute-0 nova_compute[254880]: 2026-01-26 10:06:32.460 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:06:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:32 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:06:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v727: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 10:06:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:33 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:06:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:06:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:33 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:06:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:33.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:06:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:06:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:34.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:06:34 compute-0 ceph-mon[74456]: pgmap v727: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Jan 26 10:06:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:06:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:34 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v728: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1023 B/s wr, 11 op/s
Jan 26 10:06:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 26 10:06:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 26 10:06:35 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 26 10:06:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:35 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:35 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:35.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:36.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100636 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 10:06:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:06:36] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 26 10:06:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:06:36] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 26 10:06:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 26 10:06:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:36 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:36 compute-0 ceph-mon[74456]: pgmap v728: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1023 B/s wr, 11 op/s
Jan 26 10:06:36 compute-0 ceph-mon[74456]: osdmap e149: 3 total, 3 up, 3 in
Jan 26 10:06:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 26 10:06:37 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 26 10:06:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:06:37.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:06:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:06:37.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:06:37 compute-0 podman[260419]: 2026-01-26 10:06:37.126879779 +0000 UTC m=+0.056975355 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:06:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v731: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 639 B/s wr, 13 op/s
Jan 26 10:06:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:37.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:06:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:38.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:38 compute-0 ceph-mon[74456]: osdmap e150: 3 total, 3 up, 3 in
Jan 26 10:06:38 compute-0 ceph-mon[74456]: pgmap v731: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 639 B/s wr, 13 op/s
Jan 26 10:06:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:38 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v732: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Jan 26 10:06:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:39 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:39 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:06:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:39.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:06:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:40.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:40 compute-0 ceph-mon[74456]: pgmap v732: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Jan 26 10:06:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4246575752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:06:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v733: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Jan 26 10:06:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3882837298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:06:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:41 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:41 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:06:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:41.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:06:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:06:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:42.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:06:42 compute-0 sudo[260443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:06:42 compute-0 sudo[260443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:06:42 compute-0 sudo[260443]: pam_unix(sudo:session): session closed for user root
Jan 26 10:06:42 compute-0 ceph-mon[74456]: pgmap v733: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Jan 26 10:06:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:42 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:06:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 26 10:06:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 26 10:06:43 compute-0 ceph-mon[74456]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 26 10:06:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v735: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Jan 26 10:06:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:43.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:44 compute-0 ceph-mon[74456]: osdmap e151: 3 total, 3 up, 3 in
Jan 26 10:06:44 compute-0 ceph-mon[74456]: pgmap v735: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Jan 26 10:06:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:44.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:44 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:45 compute-0 sshd-session[260473]: Invalid user postgres from 157.245.76.178 port 38548
Jan 26 10:06:45 compute-0 sshd-session[260473]: Connection closed by invalid user postgres 157.245.76.178 port 38548 [preauth]
Jan 26 10:06:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v736: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 127 op/s
Jan 26 10:06:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:45 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18000f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:45 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000014s ======
Jan 26 10:06:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:45.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Jan 26 10:06:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:46.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:06:46] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 26 10:06:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:06:46] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Jan 26 10:06:46 compute-0 ceph-mon[74456]: pgmap v736: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 127 op/s
Jan 26 10:06:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:46 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:06:47.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:06:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:06:47.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:06:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:06:47.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:06:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v737: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 26 10:06:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:47 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:47 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000014s ======
Jan 26 10:06:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:47.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Jan 26 10:06:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:06:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:48.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:06:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:06:48 compute-0 ceph-mon[74456]: pgmap v737: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 26 10:06:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:06:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:06:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:06:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:06:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:06:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:06:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:06:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:48 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v738: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 26 10:06:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:49 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:49 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100649 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:06:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:49.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000014s ======
Jan 26 10:06:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:50.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Jan 26 10:06:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003510 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:50 compute-0 ceph-mon[74456]: pgmap v738: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 26 10:06:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v739: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Jan 26 10:06:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:51 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:51 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb340026e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:51.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:52 compute-0 podman[260481]: 2026-01-26 10:06:52.219036607 +0000 UTC m=+0.141318014 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:06:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:52.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:52 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:53 compute-0 ceph-mon[74456]: pgmap v739: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Jan 26 10:06:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:06:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v740: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 85 op/s
Jan 26 10:06:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:53 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:53 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:53.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:54 compute-0 ceph-mon[74456]: pgmap v740: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 85 op/s
Jan 26 10:06:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:54.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:06:54.691 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:06:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:06:54.691 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:06:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:06:54.691 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:06:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:54 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb340026e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v741: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Jan 26 10:06:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:55 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:55 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:55.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000014s ======
Jan 26 10:06:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:56.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Jan 26 10:06:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:06:56] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Jan 26 10:06:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:06:56] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Jan 26 10:06:56 compute-0 ceph-mon[74456]: pgmap v741: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Jan 26 10:06:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:56 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:06:57.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:06:57 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 26 10:06:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v742: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 16 op/s
Jan 26 10:06:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:57 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb340026e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:57 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:57.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:06:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:06:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:06:58.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:06:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:58 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:59 compute-0 ceph-mon[74456]: pgmap v742: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 16 op/s
Jan 26 10:06:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/2156652848' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:06:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/2156652848' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:06:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:59 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:06:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v743: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 16 op/s
Jan 26 10:06:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:59 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:06:59 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:06:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:06:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000014s ======
Jan 26 10:06:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:06:59.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Jan 26 10:07:00 compute-0 ceph-mon[74456]: pgmap v743: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 16 op/s
Jan 26 10:07:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:00.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v744: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 629 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 26 10:07:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:01 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:01 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:01.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:02 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:07:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:02 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:07:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000014s ======
Jan 26 10:07:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:02.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Jan 26 10:07:02 compute-0 sudo[260519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:07:02 compute-0 sudo[260519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:02 compute-0 sudo[260519]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:02 compute-0 ceph-mon[74456]: pgmap v744: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 629 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 26 10:07:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:02 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:07:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v745: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:07:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:03 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:07:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:07:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:03 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000014s ======
Jan 26 10:07:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:03.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Jan 26 10:07:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:04.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100704 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:07:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:04 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
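[editor's note] The repeating ganesha.nfsd "svc_vc_recv ... proxy header rest len failed (will set dead)" events bracket the haproxy state change above and appear consistent with short-lived TCP probes hitting the NFS port: ntirpc tries to read a PROXY-protocol/RPC record header from the fresh connection, gets a truncated or empty read, and marks the transport dead. A minimal sketch of such a Layer4-style probe, assuming a hypothetical host and port (the frontend address is not shown in this journal):

    import socket

    def layer4_probe(host="192.0.2.10", port=2049, timeout=2.0):
        """Connect-then-close TCP check, like haproxy's default Layer4 health
        check. The server side sees an accepted connection that delivers no
        valid record header, which is what svc_vc_recv is complaining about."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True   # connect succeeded -> backend considered UP
        except OSError:
            return False      # e.g. ECONNREFUSED -> "Layer4 connection problem"

    print("UP" if layer4_probe() else "DOWN")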
Jan 26 10:07:04 compute-0 ceph-mon[74456]: pgmap v745: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:07:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:07:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v746: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 26 10:07:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:05 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:05 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:05.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:06.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:07:06] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Jan 26 10:07:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:07:06] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Jan 26 10:07:06 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:06.648 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:07:06 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:06.649 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 10:07:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:06 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:06 compute-0 ceph-mon[74456]: pgmap v746: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 26 10:07:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:07:07.114Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:07:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v747: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 239 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 26 10:07:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:07 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:07 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000014s ======
Jan 26 10:07:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:07.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Jan 26 10:07:07 compute-0 ceph-mon[74456]: pgmap v747: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 239 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 26 10:07:08 compute-0 podman[260552]: 2026-01-26 10:07:08.118960547 +0000 UTC m=+0.048254548 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 10:07:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:07:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:08.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:08 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:08.651 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:07:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:08 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v748: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 239 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 26 10:07:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:09 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:09 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000014s ======
Jan 26 10:07:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:09.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Jan 26 10:07:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:07:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:10.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:10 compute-0 ceph-mon[74456]: pgmap v748: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 239 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 26 10:07:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v749: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 26 10:07:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:11 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:11 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000015s ======
Jan 26 10:07:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:11.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 26 10:07:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:12.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:12 compute-0 ceph-mon[74456]: pgmap v749: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 26 10:07:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:12 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:07:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:13 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 10:07:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v750: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 26 10:07:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:13 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:13 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:13.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:14.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:14 compute-0 ceph-mon[74456]: pgmap v750: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 26 10:07:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:14 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v751: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 17 KiB/s wr, 3 op/s
Jan 26 10:07:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:15 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:15 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:15.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:16 compute-0 ceph-mon[74456]: pgmap v751: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 17 KiB/s wr, 3 op/s
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.069132) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422036069208, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 860, "num_deletes": 251, "total_data_size": 1361482, "memory_usage": 1383424, "flush_reason": "Manual Compaction"}
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422036079434, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1342026, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23903, "largest_seqno": 24762, "table_properties": {"data_size": 1337658, "index_size": 2020, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9870, "raw_average_key_size": 19, "raw_value_size": 1328751, "raw_average_value_size": 2678, "num_data_blocks": 89, "num_entries": 496, "num_filter_entries": 496, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769421968, "oldest_key_time": 1769421968, "file_creation_time": 1769422036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 10347 microseconds, and 4610 cpu microseconds.
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.079501) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1342026 bytes OK
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.079517) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.080837) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.080848) EVENT_LOG_v1 {"time_micros": 1769422036080844, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.080864) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1357335, prev total WAL file size 1357335, number of live WAL files 2.
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.081493) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1310KB)], [53(12MB)]
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422036081520, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 14795979, "oldest_snapshot_seqno": -1}
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5367 keys, 12632405 bytes, temperature: kUnknown
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422036136973, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12632405, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12596913, "index_size": 20982, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 137802, "raw_average_key_size": 25, "raw_value_size": 12500066, "raw_average_value_size": 2329, "num_data_blocks": 852, "num_entries": 5367, "num_filter_entries": 5367, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769422036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.137349) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12632405 bytes
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.139047) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 266.5 rd, 227.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.8 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(20.4) write-amplify(9.4) OK, records in: 5887, records dropped: 520 output_compression: NoCompression
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.139066) EVENT_LOG_v1 {"time_micros": 1769422036139058, "job": 28, "event": "compaction_finished", "compaction_time_micros": 55521, "compaction_time_cpu_micros": 23837, "output_level": 6, "num_output_files": 1, "total_output_size": 12632405, "num_input_records": 5887, "num_output_records": 5367, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422036139433, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422036141806, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.081407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.141848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.141853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.141855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.141857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:07:16 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:07:16.141859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
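[editor's note] The compaction summary above (JOB 28) reports read-write-amplify(20.4) and write-amplify(9.4); both figures can be reproduced from the byte counts in the surrounding EVENT_LOG_v1 lines. A small sketch that redoes the arithmetic (write amplification = bytes written / L0 input bytes; read-write amplification counts total input plus output against the same L0 input):

    import json

    # Abridged from the EVENT_LOG_v1 lines above.
    started = json.loads('{"job": 28, "event": "compaction_started", '
                         '"files_L0": [55], "files_L6": [53], '
                         '"input_data_size": 14795979}')
    l0_input = 1342026        # table #55, the Level-0 flush output
    total_output = 12632405   # table #56, from "compaction_finished"

    write_amp = total_output / l0_input
    rw_amp = (started["input_data_size"] + total_output) / l0_input
    print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")
    # -> write-amplify 9.4, read-write-amplify 20.4, matching the log line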
Jan 26 10:07:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:16.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:16 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:07:16 compute-0 nova_compute[254880]: 2026-01-26 10:07:16.602 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:16 compute-0 nova_compute[254880]: 2026-01-26 10:07:16.602 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:16 compute-0 nova_compute[254880]: 2026-01-26 10:07:16.619 254884 DEBUG nova.compute.manager [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 10:07:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:07:16] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Jan 26 10:07:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:07:16] "GET /metrics HTTP/1.1" 200 48469 "" "Prometheus/2.51.0"
Jan 26 10:07:16 compute-0 nova_compute[254880]: 2026-01-26 10:07:16.702 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:16 compute-0 nova_compute[254880]: 2026-01-26 10:07:16.702 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
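[editor's note] The Acquiring/acquired/released triplets around "compute_resources" and the instance UUID are oslo.concurrency's lock instrumentation; the waited/held durations it logs (0.001s waited here, 0.577s held at release below) are the useful signal when hunting contention. A tiny usage sketch, assuming the library's documented decorator form; the lock name is illustrative:

    from oslo_concurrency import lockutils

    # Serializes callers on one named lock, the same mechanism nova uses to
    # guard "compute_resources" during instance_claim.
    @lockutils.synchronized("compute_resources")
    def claim_resources():
        return "claimed"

    print(claim_resources())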
Jan 26 10:07:16 compute-0 nova_compute[254880]: 2026-01-26 10:07:16.708 254884 DEBUG nova.virt.hardware [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 10:07:16 compute-0 nova_compute[254880]: 2026-01-26 10:07:16.708 254884 INFO nova.compute.claims [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Claim successful on node compute-0.ctlplane.example.com
Jan 26 10:07:16 compute-0 nova_compute[254880]: 2026-01-26 10:07:16.812 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:07:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:16 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:07:17.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:07:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:07:17.114Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
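[editor's note] These alertmanager dispatch errors show the ceph-dashboard webhook receivers on compute-1 and compute-2 unreachable (i/o timeout and context deadline after 2 attempts). A sketch of the POST alertmanager is attempting, using the URL from the log line; the payload shape follows the standard Alertmanager webhook (version 4) schema from memory, and the alert values are illustrative, not taken from this journal:

    import json
    import urllib.request

    payload = {
        "version": "4",
        "status": "firing",
        "receiver": "ceph-dashboard",
        "alerts": [{"labels": {"alertname": "ExampleAlert"}, "annotations": {}}],
    }

    req = urllib.request.Request(
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)  # the journal shows this timing out
    except OSError as exc:
        print("notify failed:", exc)            # mirrors "Notify for alerts failed"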
Jan 26 10:07:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:07:17 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480017378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.232 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
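[editor's note] During the claim, the compute service shells out to `ceph df` — the command, its audit trail on the monitor, and the 0.420s round-trip are all visible above. A sketch of the same probe run standalone, assuming the client id and conf path from the logged command are usable outside nova:

    import json
    import subprocess

    # Same command nova_compute logged via oslo.concurrency processutils.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    df = json.loads(out)

    # "stats" carries cluster totals; "pools" the per-pool breakdown.
    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print(f"{avail / total:.1%} of {total / 2**30:.0f} GiB available")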
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.239 254884 DEBUG nova.compute.provider_tree [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.258 254884 DEBUG nova.scheduler.client.report [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.279 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.280 254884 DEBUG nova.compute.manager [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 10:07:17 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/480017378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.332 254884 DEBUG nova.compute.manager [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.332 254884 DEBUG nova.network.neutron [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.365 254884 INFO nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.387 254884 DEBUG nova.compute.manager [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 10:07:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v752: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 4.2 KiB/s wr, 2 op/s
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.525 254884 DEBUG nova.compute.manager [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.526 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.526 254884 INFO nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Creating image(s)
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.552 254884 DEBUG nova.storage.rbd_utils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.583 254884 DEBUG nova.storage.rbd_utils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.611 254884 DEBUG nova.storage.rbd_utils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.614 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "d81880e926e175d0cc7241caa7cc18231a8a289c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.615 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "d81880e926e175d0cc7241caa7cc18231a8a289c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:17 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:17 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:17.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.903 254884 WARNING oslo_policy.policy [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.903 254884 WARNING oslo_policy.policy [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.905 254884 DEBUG nova.policy [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1208d3e25b940ea93fe76884c7a53db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 10:07:17 compute-0 nova_compute[254880]: 2026-01-26 10:07:17.967 254884 DEBUG nova.virt.libvirt.imagebackend [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Image locations are: [{'url': 'rbd://1a70b85d-e3fd-5814-8a6a-37ea00fcae30/images/6789692f-fc1f-4efa-ae75-dcc13be695ef/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://1a70b85d-e3fd-5814-8a6a-37ea00fcae30/images/6789692f-fc1f-4efa-ae75-dcc13be695ef/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 26 10:07:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:07:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:18.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:18 compute-0 ceph-mon[74456]: pgmap v752: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 4.2 KiB/s wr, 2 op/s
Jan 26 10:07:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:07:18
Jan 26 10:07:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:07:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:07:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.log', 'images', 'default.rgw.control', 'volumes', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups']
Jan 26 10:07:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:07:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:07:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:07:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:07:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:07:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:07:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:07:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:07:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:07:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:18 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007596545956241453 of space, bias 1.0, pg target 0.22789637868724358 quantized to 32 (current 32)
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
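[editor's note] Each pg_autoscaler line pairs a capacity ratio and a bias with the resulting raw pg target, and the logged numbers are self-consistent with target = ratio × bias × 300 — i.e. the default mon_target_pg_per_osd (100) times what appears to be three OSDs in this cluster; the raw value is then quantized to a power of two. A worked check against the figures above (NUM_OSDS is inferred from the ratios, not stated in the log):

    TARGET_PG_PER_OSD = 100   # Ceph default (mon_target_pg_per_osd)
    NUM_OSDS = 3              # inferred: ratio * bias * 300 reproduces every line

    def raw_pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    # Values copied from the pg_autoscaler lines above.
    print(raw_pg_target(7.185749983720779e-06, 1.0))   # .mgr -> 0.0021557249951...
    print(raw_pg_target(0.0007596545956241453, 1.0))   # vms  -> 0.2278963786872...
    print(raw_pg_target(5.087256625643029e-07, 4.0))   # cephfs.cephfs.meta
                                                       #      -> 0.0006104707950...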
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.228 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.275 254884 DEBUG nova.network.neutron [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Successfully created port: 9e43222f-ece8-42ba-968c-6ed6feedb649 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.289 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c.part --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.290 254884 DEBUG nova.virt.images [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] 6789692f-fc1f-4efa-ae75-dcc13be695ef was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.291 254884 DEBUG nova.privsep.utils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.292 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c.part /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:07:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.461 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c.part /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c.converted" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.465 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:07:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:19 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:07:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:19 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:07:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v753: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 4.2 KiB/s wr, 2 op/s
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.515 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c.converted --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.516 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "d81880e926e175d0cc7241caa7cc18231a8a289c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.539 254884 DEBUG nova.storage.rbd_utils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.543 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:07:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:19 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:19 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.812 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:07:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100719 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.884 254884 DEBUG nova.storage.rbd_utils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] resizing rbd image 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 10:07:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000014s ======
Jan 26 10:07:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:19.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Jan 26 10:07:19 compute-0 nova_compute[254880]: 2026-01-26 10:07:19.988 254884 DEBUG nova.objects.instance [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'migration_context' on Instance uuid 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:07:20 compute-0 nova_compute[254880]: 2026-01-26 10:07:20.008 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 10:07:20 compute-0 nova_compute[254880]: 2026-01-26 10:07:20.009 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Ensure instance console log exists: /var/lib/nova/instances/5ac85101-7f84-4ad6-b66a-95cd2fdfcd14/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 10:07:20 compute-0 nova_compute[254880]: 2026-01-26 10:07:20.009 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:20 compute-0 nova_compute[254880]: 2026-01-26 10:07:20.009 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:20 compute-0 nova_compute[254880]: 2026-01-26 10:07:20.010 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000014s ======
Jan 26 10:07:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:20.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Jan 26 10:07:20 compute-0 ceph-mon[74456]: pgmap v753: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 4.2 KiB/s wr, 2 op/s
Jan 26 10:07:20 compute-0 nova_compute[254880]: 2026-01-26 10:07:20.828 254884 DEBUG nova.network.neutron [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Successfully updated port: 9e43222f-ece8-42ba-968c-6ed6feedb649 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 10:07:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:20 compute-0 nova_compute[254880]: 2026-01-26 10:07:20.865 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "refresh_cache-5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:07:20 compute-0 nova_compute[254880]: 2026-01-26 10:07:20.865 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquired lock "refresh_cache-5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:07:20 compute-0 nova_compute[254880]: 2026-01-26 10:07:20.865 254884 DEBUG nova.network.neutron [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 10:07:21 compute-0 nova_compute[254880]: 2026-01-26 10:07:21.134 254884 DEBUG nova.network.neutron [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 10:07:21 compute-0 nova_compute[254880]: 2026-01-26 10:07:21.336 254884 DEBUG nova.compute.manager [req-51f75335-2f22-4d9c-a128-9909bff235b3 req-9df01e93-ad87-4f78-8612-4ab52ef271d6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Received event network-changed-9e43222f-ece8-42ba-968c-6ed6feedb649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:07:21 compute-0 nova_compute[254880]: 2026-01-26 10:07:21.337 254884 DEBUG nova.compute.manager [req-51f75335-2f22-4d9c-a128-9909bff235b3 req-9df01e93-ad87-4f78-8612-4ab52ef271d6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Refreshing instance network info cache due to event network-changed-9e43222f-ece8-42ba-968c-6ed6feedb649. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:07:21 compute-0 nova_compute[254880]: 2026-01-26 10:07:21.337 254884 DEBUG oslo_concurrency.lockutils [req-51f75335-2f22-4d9c-a128-9909bff235b3 req-9df01e93-ad87-4f78-8612-4ab52ef271d6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:07:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v754: 353 pgs: 353 active+clean; 167 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 26 10:07:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:21 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:21 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:21.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.074 254884 DEBUG nova.network.neutron [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Updating instance_info_cache with network_info: [{"id": "9e43222f-ece8-42ba-968c-6ed6feedb649", "address": "fa:16:3e:77:20:58", "network": {"id": "82a5dc98-3279-47e7-b5f8-a111d4ea33ff", "bridge": "br-int", "label": "tempest-network-smoke--814873933", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e43222f-ec", "ovs_interfaceid": "9e43222f-ece8-42ba-968c-6ed6feedb649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.096 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Releasing lock "refresh_cache-5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.097 254884 DEBUG nova.compute.manager [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Instance network_info: |[{"id": "9e43222f-ece8-42ba-968c-6ed6feedb649", "address": "fa:16:3e:77:20:58", "network": {"id": "82a5dc98-3279-47e7-b5f8-a111d4ea33ff", "bridge": "br-int", "label": "tempest-network-smoke--814873933", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e43222f-ec", "ovs_interfaceid": "9e43222f-ece8-42ba-968c-6ed6feedb649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.097 254884 DEBUG oslo_concurrency.lockutils [req-51f75335-2f22-4d9c-a128-9909bff235b3 req-9df01e93-ad87-4f78-8612-4ab52ef271d6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.097 254884 DEBUG nova.network.neutron [req-51f75335-2f22-4d9c-a128-9909bff235b3 req-9df01e93-ad87-4f78-8612-4ab52ef271d6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Refreshing network info cache for port 9e43222f-ece8-42ba-968c-6ed6feedb649 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.100 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Start _get_guest_xml network_info=[{"id": "9e43222f-ece8-42ba-968c-6ed6feedb649", "address": "fa:16:3e:77:20:58", "network": {"id": "82a5dc98-3279-47e7-b5f8-a111d4ea33ff", "bridge": "br-int", "label": "tempest-network-smoke--814873933", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e43222f-ec", "ovs_interfaceid": "9e43222f-ece8-42ba-968c-6ed6feedb649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T10:05:39Z,direct_url=<?>,disk_format='qcow2',id=6789692f-fc1f-4efa-ae75-dcc13be695ef,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3ff3fa2a5531460b993c609589aa545d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T10:05:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'device_type': 'disk', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'image_id': '6789692f-fc1f-4efa-ae75-dcc13be695ef'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.104 254884 WARNING nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.108 254884 DEBUG nova.virt.libvirt.host [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.109 254884 DEBUG nova.virt.libvirt.host [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.112 254884 DEBUG nova.virt.libvirt.host [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.112 254884 DEBUG nova.virt.libvirt.host [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.113 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.113 254884 DEBUG nova.virt.hardware [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T10:05:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='57e1601b-dbfa-4d3b-8b96-27302e4a7a06',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T10:05:39Z,direct_url=<?>,disk_format='qcow2',id=6789692f-fc1f-4efa-ae75-dcc13be695ef,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3ff3fa2a5531460b993c609589aa545d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T10:05:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.113 254884 DEBUG nova.virt.hardware [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.113 254884 DEBUG nova.virt.hardware [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.114 254884 DEBUG nova.virt.hardware [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.114 254884 DEBUG nova.virt.hardware [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.114 254884 DEBUG nova.virt.hardware [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.114 254884 DEBUG nova.virt.hardware [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.114 254884 DEBUG nova.virt.hardware [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.115 254884 DEBUG nova.virt.hardware [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.115 254884 DEBUG nova.virt.hardware [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.115 254884 DEBUG nova.virt.hardware [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.118 254884 DEBUG nova.privsep.utils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.118 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:07:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:22.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:22 compute-0 sudo[260805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:07:22 compute-0 sudo[260805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:22 compute-0 sudo[260805]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:22 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 10:07:22 compute-0 podman[260829]: 2026-01-26 10:07:22.552998049 +0000 UTC m=+0.072185405 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 10:07:22 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 10:07:22 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886990296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.589 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:07:22 compute-0 ceph-mon[74456]: pgmap v754: 353 pgs: 353 active+clean; 167 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 26 10:07:22 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1886990296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.619 254884 DEBUG nova.storage.rbd_utils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:07:22 compute-0 nova_compute[254880]: 2026-01-26 10:07:22.622 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:07:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:22 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 10:07:23 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2925681421' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.070 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.071 254884 DEBUG nova.virt.libvirt.vif [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T10:07:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1236230435',display_name='tempest-TestNetworkBasicOps-server-1236230435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1236230435',id=2,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFTOs3BXjnN5+km+pnLH2Ek/lorLOv1RvQSyPovSAMkr1PMNI58K7B5CMpbJHI4DHjOvYyHXNzgdFUGarrhqe58ezYN8ulK/lRs2EXeW8gH8d4vZ1Z0yG61vGiMDueIFbg==',key_name='tempest-TestNetworkBasicOps-1126988904',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-q0cadtx1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T10:07:17Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=5ac85101-7f84-4ad6-b66a-95cd2fdfcd14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e43222f-ece8-42ba-968c-6ed6feedb649", "address": "fa:16:3e:77:20:58", "network": {"id": "82a5dc98-3279-47e7-b5f8-a111d4ea33ff", "bridge": "br-int", "label": "tempest-network-smoke--814873933", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e43222f-ec", "ovs_interfaceid": "9e43222f-ece8-42ba-968c-6ed6feedb649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.072 254884 DEBUG nova.network.os_vif_util [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "9e43222f-ece8-42ba-968c-6ed6feedb649", "address": "fa:16:3e:77:20:58", "network": {"id": "82a5dc98-3279-47e7-b5f8-a111d4ea33ff", "bridge": "br-int", "label": "tempest-network-smoke--814873933", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e43222f-ec", "ovs_interfaceid": "9e43222f-ece8-42ba-968c-6ed6feedb649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.073 254884 DEBUG nova.network.os_vif_util [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:20:58,bridge_name='br-int',has_traffic_filtering=True,id=9e43222f-ece8-42ba-968c-6ed6feedb649,network=Network(82a5dc98-3279-47e7-b5f8-a111d4ea33ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e43222f-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.074 254884 DEBUG nova.objects.instance [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:07:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.127 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] End _get_guest_xml xml=<domain type="kvm">
Jan 26 10:07:23 compute-0 nova_compute[254880]:   <uuid>5ac85101-7f84-4ad6-b66a-95cd2fdfcd14</uuid>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   <name>instance-00000002</name>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   <memory>131072</memory>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   <vcpu>1</vcpu>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   <metadata>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <nova:name>tempest-TestNetworkBasicOps-server-1236230435</nova:name>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <nova:creationTime>2026-01-26 10:07:22</nova:creationTime>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <nova:flavor name="m1.nano">
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <nova:memory>128</nova:memory>
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <nova:disk>1</nova:disk>
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <nova:swap>0</nova:swap>
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <nova:vcpus>1</nova:vcpus>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       </nova:flavor>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <nova:owner>
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <nova:user uuid="c1208d3e25b940ea93fe76884c7a53db">tempest-TestNetworkBasicOps-966559857-project-member</nova:user>
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <nova:project uuid="6ed221b375a44fc2bb2a8f232c5446e7">tempest-TestNetworkBasicOps-966559857</nova:project>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       </nova:owner>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <nova:root type="image" uuid="6789692f-fc1f-4efa-ae75-dcc13be695ef"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <nova:ports>
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <nova:port uuid="9e43222f-ece8-42ba-968c-6ed6feedb649">
Jan 26 10:07:23 compute-0 nova_compute[254880]:           <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:         </nova:port>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       </nova:ports>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     </nova:instance>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   </metadata>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   <sysinfo type="smbios">
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <system>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <entry name="manufacturer">RDO</entry>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <entry name="product">OpenStack Compute</entry>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <entry name="serial">5ac85101-7f84-4ad6-b66a-95cd2fdfcd14</entry>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <entry name="uuid">5ac85101-7f84-4ad6-b66a-95cd2fdfcd14</entry>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <entry name="family">Virtual Machine</entry>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     </system>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   </sysinfo>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   <os>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <boot dev="hd"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <smbios mode="sysinfo"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   </os>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   <features>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <acpi/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <apic/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <vmcoreinfo/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   </features>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   <clock offset="utc">
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <timer name="hpet" present="no"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   </clock>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   <cpu mode="host-model" match="exact">
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   </cpu>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   <devices>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <disk type="network" device="disk">
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <driver type="raw" cache="none"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <source protocol="rbd" name="vms/5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk">
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <host name="192.168.122.100" port="6789"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <host name="192.168.122.102" port="6789"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <host name="192.168.122.101" port="6789"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       </source>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <auth username="openstack">
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <secret type="ceph" uuid="1a70b85d-e3fd-5814-8a6a-37ea00fcae30"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <target dev="vda" bus="virtio"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <disk type="network" device="cdrom">
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <driver type="raw" cache="none"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <source protocol="rbd" name="vms/5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk.config">
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <host name="192.168.122.100" port="6789"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <host name="192.168.122.102" port="6789"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <host name="192.168.122.101" port="6789"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       </source>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <auth username="openstack">
Jan 26 10:07:23 compute-0 nova_compute[254880]:         <secret type="ceph" uuid="1a70b85d-e3fd-5814-8a6a-37ea00fcae30"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <target dev="sda" bus="sata"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <interface type="ethernet">
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <mac address="fa:16:3e:77:20:58"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <model type="virtio"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <mtu size="1442"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <target dev="tap9e43222f-ec"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <serial type="pty">
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <log file="/var/lib/nova/instances/5ac85101-7f84-4ad6-b66a-95cd2fdfcd14/console.log" append="off"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     </serial>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <video>
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <model type="virtio"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     </video>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <input type="tablet" bus="usb"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <rng model="virtio">
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <backend model="random">/dev/urandom</backend>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     </rng>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <controller type="usb" index="0"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     <memballoon model="virtio">
Jan 26 10:07:23 compute-0 nova_compute[254880]:       <stats period="10"/>
Jan 26 10:07:23 compute-0 nova_compute[254880]:     </memballoon>
Jan 26 10:07:23 compute-0 nova_compute[254880]:   </devices>
Jan 26 10:07:23 compute-0 nova_compute[254880]: </domain>
Jan 26 10:07:23 compute-0 nova_compute[254880]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.128 254884 DEBUG nova.compute.manager [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Preparing to wait for external event network-vif-plugged-9e43222f-ece8-42ba-968c-6ed6feedb649 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.128 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.128 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.128 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.129 254884 DEBUG nova.virt.libvirt.vif [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T10:07:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1236230435',display_name='tempest-TestNetworkBasicOps-server-1236230435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1236230435',id=2,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFTOs3BXjnN5+km+pnLH2Ek/lorLOv1RvQSyPovSAMkr1PMNI58K7B5CMpbJHI4DHjOvYyHXNzgdFUGarrhqe58ezYN8ulK/lRs2EXeW8gH8d4vZ1Z0yG61vGiMDueIFbg==',key_name='tempest-TestNetworkBasicOps-1126988904',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-q0cadtx1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T10:07:17Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=5ac85101-7f84-4ad6-b66a-95cd2fdfcd14,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e43222f-ece8-42ba-968c-6ed6feedb649", "address": "fa:16:3e:77:20:58", "network": {"id": "82a5dc98-3279-47e7-b5f8-a111d4ea33ff", "bridge": "br-int", "label": "tempest-network-smoke--814873933", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e43222f-ec", "ovs_interfaceid": "9e43222f-ece8-42ba-968c-6ed6feedb649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.129 254884 DEBUG nova.network.os_vif_util [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "9e43222f-ece8-42ba-968c-6ed6feedb649", "address": "fa:16:3e:77:20:58", "network": {"id": "82a5dc98-3279-47e7-b5f8-a111d4ea33ff", "bridge": "br-int", "label": "tempest-network-smoke--814873933", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e43222f-ec", "ovs_interfaceid": "9e43222f-ece8-42ba-968c-6ed6feedb649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.130 254884 DEBUG nova.network.os_vif_util [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:20:58,bridge_name='br-int',has_traffic_filtering=True,id=9e43222f-ece8-42ba-968c-6ed6feedb649,network=Network(82a5dc98-3279-47e7-b5f8-a111d4ea33ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e43222f-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.130 254884 DEBUG os_vif [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:20:58,bridge_name='br-int',has_traffic_filtering=True,id=9e43222f-ece8-42ba-968c-6ed6feedb649,network=Network(82a5dc98-3279-47e7-b5f8-a111d4ea33ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e43222f-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.159 254884 DEBUG ovsdbapp.backend.ovs_idl [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.159 254884 DEBUG ovsdbapp.backend.ovs_idl [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.159 254884 DEBUG ovsdbapp.backend.ovs_idl [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.160 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.160 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.160 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.161 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.162 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.164 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.172 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.172 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.172 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.173 254884 INFO oslo.privsep.daemon [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpsc23rn3k/privsep.sock']
Jan 26 10:07:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v755: 353 pgs: 353 active+clean; 167 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.576 254884 DEBUG nova.network.neutron [req-51f75335-2f22-4d9c-a128-9909bff235b3 req-9df01e93-ad87-4f78-8612-4ab52ef271d6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Updated VIF entry in instance network info cache for port 9e43222f-ece8-42ba-968c-6ed6feedb649. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.577 254884 DEBUG nova.network.neutron [req-51f75335-2f22-4d9c-a128-9909bff235b3 req-9df01e93-ad87-4f78-8612-4ab52ef271d6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Updating instance_info_cache with network_info: [{"id": "9e43222f-ece8-42ba-968c-6ed6feedb649", "address": "fa:16:3e:77:20:58", "network": {"id": "82a5dc98-3279-47e7-b5f8-a111d4ea33ff", "bridge": "br-int", "label": "tempest-network-smoke--814873933", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e43222f-ec", "ovs_interfaceid": "9e43222f-ece8-42ba-968c-6ed6feedb649", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.596 254884 DEBUG oslo_concurrency.lockutils [req-51f75335-2f22-4d9c-a128-9909bff235b3 req-9df01e93-ad87-4f78-8612-4ab52ef271d6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:07:23 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2925681421' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:07:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:23 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:23 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.877 254884 INFO oslo.privsep.daemon [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Spawned new privsep daemon via rootwrap
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.754 260902 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.760 260902 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.762 260902 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 26 10:07:23 compute-0 nova_compute[254880]: 2026-01-26 10:07:23.763 260902 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260902
Jan 26 10:07:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:23.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:24 compute-0 nova_compute[254880]: 2026-01-26 10:07:24.046 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:24 compute-0 nova_compute[254880]: 2026-01-26 10:07:24.201 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:24 compute-0 nova_compute[254880]: 2026-01-26 10:07:24.202 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e43222f-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:07:24 compute-0 nova_compute[254880]: 2026-01-26 10:07:24.203 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e43222f-ec, col_values=(('external_ids', {'iface-id': '9e43222f-ece8-42ba-968c-6ed6feedb649', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:20:58', 'vm-uuid': '5ac85101-7f84-4ad6-b66a-95cd2fdfcd14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:07:24 compute-0 nova_compute[254880]: 2026-01-26 10:07:24.204 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:24 compute-0 NetworkManager[48970]: <info>  [1769422044.2053] manager: (tap9e43222f-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 26 10:07:24 compute-0 nova_compute[254880]: 2026-01-26 10:07:24.206 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:07:24 compute-0 nova_compute[254880]: 2026-01-26 10:07:24.214 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:24 compute-0 nova_compute[254880]: 2026-01-26 10:07:24.214 254884 INFO os_vif [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:20:58,bridge_name='br-int',has_traffic_filtering=True,id=9e43222f-ece8-42ba-968c-6ed6feedb649,network=Network(82a5dc98-3279-47e7-b5f8-a111d4ea33ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e43222f-ec')
Jan 26 10:07:24 compute-0 nova_compute[254880]: 2026-01-26 10:07:24.257 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 10:07:24 compute-0 nova_compute[254880]: 2026-01-26 10:07:24.258 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 10:07:24 compute-0 nova_compute[254880]: 2026-01-26 10:07:24.258 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No VIF found with MAC fa:16:3e:77:20:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 10:07:24 compute-0 nova_compute[254880]: 2026-01-26 10:07:24.258 254884 INFO nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Using config drive
Jan 26 10:07:24 compute-0 nova_compute[254880]: 2026-01-26 10:07:24.284 254884 DEBUG nova.storage.rbd_utils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:07:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:24.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:24 compute-0 ceph-mon[74456]: pgmap v755: 353 pgs: 353 active+clean; 167 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 26 10:07:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100724 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 26 10:07:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:24 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:25 compute-0 nova_compute[254880]: 2026-01-26 10:07:25.121 254884 INFO nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Creating config drive at /var/lib/nova/instances/5ac85101-7f84-4ad6-b66a-95cd2fdfcd14/disk.config
Jan 26 10:07:25 compute-0 nova_compute[254880]: 2026-01-26 10:07:25.126 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ac85101-7f84-4ad6-b66a-95cd2fdfcd14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp77htkvro execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:07:25 compute-0 nova_compute[254880]: 2026-01-26 10:07:25.266 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ac85101-7f84-4ad6-b66a-95cd2fdfcd14/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp77htkvro" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:07:25 compute-0 nova_compute[254880]: 2026-01-26 10:07:25.300 254884 DEBUG nova.storage.rbd_utils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:07:25 compute-0 nova_compute[254880]: 2026-01-26 10:07:25.306 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ac85101-7f84-4ad6-b66a-95cd2fdfcd14/disk.config 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:07:25 compute-0 nova_compute[254880]: 2026-01-26 10:07:25.475 254884 DEBUG oslo_concurrency.processutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ac85101-7f84-4ad6-b66a-95cd2fdfcd14/disk.config 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:07:25 compute-0 nova_compute[254880]: 2026-01-26 10:07:25.476 254884 INFO nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Deleting local config drive /var/lib/nova/instances/5ac85101-7f84-4ad6-b66a-95cd2fdfcd14/disk.config because it was imported into RBD.
Jan 26 10:07:25 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 26 10:07:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v756: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 26 10:07:25 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 26 10:07:25 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 26 10:07:25 compute-0 kernel: tap9e43222f-ec: entered promiscuous mode
Jan 26 10:07:25 compute-0 NetworkManager[48970]: <info>  [1769422045.5875] manager: (tap9e43222f-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Jan 26 10:07:25 compute-0 ovn_controller[155832]: 2026-01-26T10:07:25Z|00027|binding|INFO|Claiming lport 9e43222f-ece8-42ba-968c-6ed6feedb649 for this chassis.
Jan 26 10:07:25 compute-0 ovn_controller[155832]: 2026-01-26T10:07:25Z|00028|binding|INFO|9e43222f-ece8-42ba-968c-6ed6feedb649: Claiming fa:16:3e:77:20:58 10.100.0.22
Jan 26 10:07:25 compute-0 nova_compute[254880]: 2026-01-26 10:07:25.589 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:25 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:25.604 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:20:58 10.100.0.22'], port_security=['fa:16:3e:77:20:58 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': '5ac85101-7f84-4ad6-b66a-95cd2fdfcd14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82a5dc98-3279-47e7-b5f8-a111d4ea33ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9437f473-f5d5-4abb-a3a1-691a33bf3b29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30cd5c01-d547-4b4e-a8ed-aeb208f30737, chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], logical_port=9e43222f-ece8-42ba-968c-6ed6feedb649) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:07:25 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:25.606 166625 INFO neutron.agent.ovn.metadata.agent [-] Port 9e43222f-ece8-42ba-968c-6ed6feedb649 in datapath 82a5dc98-3279-47e7-b5f8-a111d4ea33ff bound to our chassis
Jan 26 10:07:25 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:25.608 166625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82a5dc98-3279-47e7-b5f8-a111d4ea33ff
Jan 26 10:07:25 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:25.609 166625 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpae5yrpwt/privsep.sock']
Jan 26 10:07:25 compute-0 systemd-udevd[261004]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 10:07:25 compute-0 systemd-machined[221254]: New machine qemu-1-instance-00000002.
Jan 26 10:07:25 compute-0 NetworkManager[48970]: <info>  [1769422045.6426] device (tap9e43222f-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 10:07:25 compute-0 NetworkManager[48970]: <info>  [1769422045.6434] device (tap9e43222f-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 10:07:25 compute-0 nova_compute[254880]: 2026-01-26 10:07:25.645 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:25 compute-0 nova_compute[254880]: 2026-01-26 10:07:25.652 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:25 compute-0 ovn_controller[155832]: 2026-01-26T10:07:25Z|00029|binding|INFO|Setting lport 9e43222f-ece8-42ba-968c-6ed6feedb649 ovn-installed in OVS
Jan 26 10:07:25 compute-0 ovn_controller[155832]: 2026-01-26T10:07:25Z|00030|binding|INFO|Setting lport 9e43222f-ece8-42ba-968c-6ed6feedb649 up in Southbound
Jan 26 10:07:25 compute-0 nova_compute[254880]: 2026-01-26 10:07:25.654 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:25 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Jan 26 10:07:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:25 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:25 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:25.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:26 compute-0 nova_compute[254880]: 2026-01-26 10:07:26.200 254884 DEBUG nova.compute.manager [req-485f0ad8-bab0-484b-a9ae-0fe0526ff7d6 req-ad687dbb-7bdb-48e2-8c6a-dfa00f06fecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Received event network-vif-plugged-9e43222f-ece8-42ba-968c-6ed6feedb649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:07:26 compute-0 nova_compute[254880]: 2026-01-26 10:07:26.200 254884 DEBUG oslo_concurrency.lockutils [req-485f0ad8-bab0-484b-a9ae-0fe0526ff7d6 req-ad687dbb-7bdb-48e2-8c6a-dfa00f06fecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:26 compute-0 nova_compute[254880]: 2026-01-26 10:07:26.200 254884 DEBUG oslo_concurrency.lockutils [req-485f0ad8-bab0-484b-a9ae-0fe0526ff7d6 req-ad687dbb-7bdb-48e2-8c6a-dfa00f06fecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:26 compute-0 nova_compute[254880]: 2026-01-26 10:07:26.200 254884 DEBUG oslo_concurrency.lockutils [req-485f0ad8-bab0-484b-a9ae-0fe0526ff7d6 req-ad687dbb-7bdb-48e2-8c6a-dfa00f06fecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:26 compute-0 nova_compute[254880]: 2026-01-26 10:07:26.201 254884 DEBUG nova.compute.manager [req-485f0ad8-bab0-484b-a9ae-0fe0526ff7d6 req-ad687dbb-7bdb-48e2-8c6a-dfa00f06fecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Processing event network-vif-plugged-9e43222f-ece8-42ba-968c-6ed6feedb649 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 10:07:26 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:26.265 166625 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 26 10:07:26 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:26.266 166625 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpae5yrpwt/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 26 10:07:26 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:26.147 261020 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 10:07:26 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:26.150 261020 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 10:07:26 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:26.152 261020 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 26 10:07:26 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:26.152 261020 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261020
Jan 26 10:07:26 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:26.269 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[e3250fbb-b948-4019-80ca-2f64a42cb53f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:26.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:26 compute-0 ceph-mon[74456]: pgmap v756: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 26 10:07:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:07:26] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 26 10:07:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:07:26] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 26 10:07:26 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:26.833 261020 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:26 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:26.833 261020 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:26 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:26.833 261020 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:26 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:26 compute-0 nova_compute[254880]: 2026-01-26 10:07:26.993 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422046.9924173, 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:07:26 compute-0 nova_compute[254880]: 2026-01-26 10:07:26.994 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] VM Started (Lifecycle Event)
Jan 26 10:07:26 compute-0 nova_compute[254880]: 2026-01-26 10:07:26.996 254884 DEBUG nova.compute.manager [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 10:07:26 compute-0 nova_compute[254880]: 2026-01-26 10:07:26.999 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.008 254884 INFO nova.virt.libvirt.driver [-] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Instance spawned successfully.
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.009 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 10:07:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:07:27.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.145 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.149 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.152 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.152 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.153 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.153 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.154 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.155 254884 DEBUG nova.virt.libvirt.driver [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.184 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.185 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422046.9925506, 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.185 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] VM Paused (Lifecycle Event)
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.220 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.225 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422046.9985077, 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.225 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] VM Resumed (Lifecycle Event)
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.229 254884 INFO nova.compute.manager [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Took 9.70 seconds to spawn the instance on the hypervisor.
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.230 254884 DEBUG nova.compute.manager [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.254 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.258 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 10:07:27 compute-0 sudo[261069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.279 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 10:07:27 compute-0 sudo[261069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:27 compute-0 sudo[261069]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.291 254884 INFO nova.compute.manager [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Took 10.61 seconds to build instance.
Jan 26 10:07:27 compute-0 nova_compute[254880]: 2026-01-26 10:07:27.314 254884 DEBUG oslo_concurrency.lockutils [None req-e445b2ce-ad35-41a7-9da0-a0a14e681965 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:27 compute-0 sudo[261094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 26 10:07:27 compute-0 sudo[261094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:27 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:27.435 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[09ca4606-ddb3-4ecd-9f67-6efcd9b1a473]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:27 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:27.437 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82a5dc98-31 in ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 10:07:27 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:27.438 261020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82a5dc98-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 10:07:27 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:27.438 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b18773-7d2a-44a8-807f-c84a48da3cea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:27 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:27.441 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cd3c7b-e737-40d3-9c83-8b4474f11e5a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:27 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:27.461 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[392059e6-4144-460c-a799-dab2cda57a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:27 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:27.475 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[2270225d-59e1-4bef-a8aa-388a7017e626]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:27 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:27.477 166625 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpayx5et2c/privsep.sock']
Jan 26 10:07:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v757: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 26 10:07:27 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3153713550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:27 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:27 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:27 compute-0 podman[261198]: 2026-01-26 10:07:27.871455953 +0000 UTC m=+0.063397277 container exec 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 10:07:27 compute-0 sshd-session[261139]: Invalid user postgres from 157.245.76.178 port 45740
Jan 26 10:07:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:27.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:27 compute-0 podman[261198]: 2026-01-26 10:07:27.968946557 +0000 UTC m=+0.160887861 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 10:07:27 compute-0 sshd-session[261139]: Connection closed by invalid user postgres 157.245.76.178 port 45740 [preauth]
Jan 26 10:07:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:07:28 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:28.269 166625 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 26 10:07:28 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:28.270 166625 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpayx5et2c/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 26 10:07:28 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:28.107 261249 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 10:07:28 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:28.112 261249 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 10:07:28 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:28.114 261249 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 26 10:07:28 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:28.114 261249 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261249
Jan 26 10:07:28 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:28.272 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[b49eea6a-82e8-4df9-85a3-18e6bba4c8f7]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:28 compute-0 nova_compute[254880]: 2026-01-26 10:07:28.291 254884 DEBUG nova.compute.manager [req-a0eed941-224b-43a6-bf9e-33490d71f45e req-31713a8c-3e9c-4f8e-8623-d9f2c7b866e3 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Received event network-vif-plugged-9e43222f-ece8-42ba-968c-6ed6feedb649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:07:28 compute-0 nova_compute[254880]: 2026-01-26 10:07:28.292 254884 DEBUG oslo_concurrency.lockutils [req-a0eed941-224b-43a6-bf9e-33490d71f45e req-31713a8c-3e9c-4f8e-8623-d9f2c7b866e3 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:28 compute-0 nova_compute[254880]: 2026-01-26 10:07:28.292 254884 DEBUG oslo_concurrency.lockutils [req-a0eed941-224b-43a6-bf9e-33490d71f45e req-31713a8c-3e9c-4f8e-8623-d9f2c7b866e3 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:28 compute-0 nova_compute[254880]: 2026-01-26 10:07:28.292 254884 DEBUG oslo_concurrency.lockutils [req-a0eed941-224b-43a6-bf9e-33490d71f45e req-31713a8c-3e9c-4f8e-8623-d9f2c7b866e3 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:28 compute-0 nova_compute[254880]: 2026-01-26 10:07:28.293 254884 DEBUG nova.compute.manager [req-a0eed941-224b-43a6-bf9e-33490d71f45e req-31713a8c-3e9c-4f8e-8623-d9f2c7b866e3 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] No waiting events found dispatching network-vif-plugged-9e43222f-ece8-42ba-968c-6ed6feedb649 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:07:28 compute-0 nova_compute[254880]: 2026-01-26 10:07:28.293 254884 WARNING nova.compute.manager [req-a0eed941-224b-43a6-bf9e-33490d71f45e req-31713a8c-3e9c-4f8e-8623-d9f2c7b866e3 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Received unexpected event network-vif-plugged-9e43222f-ece8-42ba-968c-6ed6feedb649 for instance with vm_state active and task_state None.
Jan 26 10:07:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:28.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:28 compute-0 podman[261316]: 2026-01-26 10:07:28.432059342 +0000 UTC m=+0.060027776 container exec 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:07:28 compute-0 podman[261316]: 2026-01-26 10:07:28.4635704 +0000 UTC m=+0.091538804 container exec_died 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:07:28 compute-0 ceph-mon[74456]: pgmap v757: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 26 10:07:28 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2994227211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:28 compute-0 podman[261409]: 2026-01-26 10:07:28.77818216 +0000 UTC m=+0.065814504 container exec a0a85c01ab015d054cdde2983b0776ad331e5ff996efcf13e612a1a97d7b7fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 10:07:28 compute-0 podman[261409]: 2026-01-26 10:07:28.792560551 +0000 UTC m=+0.080192865 container exec_died a0a85c01ab015d054cdde2983b0776ad331e5ff996efcf13e612a1a97d7b7fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 26 10:07:28 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:28.845 261249 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:28 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:28.845 261249 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:28 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:28.845 261249 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:28 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:28 compute-0 nova_compute[254880]: 2026-01-26 10:07:28.954 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:07:28 compute-0 nova_compute[254880]: 2026-01-26 10:07:28.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:07:29 compute-0 podman[261474]: 2026-01-26 10:07:29.008699773 +0000 UTC m=+0.069339538 container exec 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 10:07:29 compute-0 podman[261474]: 2026-01-26 10:07:29.014630934 +0000 UTC m=+0.075270669 container exec_died 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 10:07:29 compute-0 nova_compute[254880]: 2026-01-26 10:07:29.047 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:29 compute-0 podman[261540]: 2026-01-26 10:07:29.20443476 +0000 UTC m=+0.053030870 container exec 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vendor=Red Hat, Inc., description=keepalived for Ceph, io.buildah.version=1.28.2, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived)
Jan 26 10:07:29 compute-0 nova_compute[254880]: 2026-01-26 10:07:29.205 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:29 compute-0 podman[261540]: 2026-01-26 10:07:29.235235591 +0000 UTC m=+0.083831701 container exec_died 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, release=1793, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vcs-type=git, io.buildah.version=1.28.2, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, architecture=x86_64, com.redhat.component=keepalived-container, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 10:07:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v758: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.518 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[772d65a3-fcee-4def-94e0-ef9d0b693d75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:29 compute-0 NetworkManager[48970]: <info>  [1769422049.5376] manager: (tap82a5dc98-30): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.540 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[f7977bc0-f4e9-4401-b523-a00298e6beea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:29 compute-0 systemd-udevd[261620]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.566 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4f3bd0-dd38-4e25-907c-31e26dec2c6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.570 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[54ad0628-4cd7-436f-b737-a388c5394619]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:29 compute-0 NetworkManager[48970]: <info>  [1769422049.5959] device (tap82a5dc98-30): carrier: link connected
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.599 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[b6bc5588-04ca-4193-8a3e-259c5d7df169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.620 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[a87d2343-3a1e-4f47-ba23-d5ab76cb9f59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82a5dc98-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:7c:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400325, 'reachable_time': 33286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261639, 'error': None, 'target': 'ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.637 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[003d10e4-3408-441a-b54f-1092d49ace9c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4a:7c90'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400325, 'tstamp': 400325}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261640, 'error': None, 'target': 'ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.655 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb06434-d15e-4b5c-b4b6-28d22fcf3585]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82a5dc98-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4a:7c:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400325, 'reachable_time': 33286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261641, 'error': None, 'target': 'ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.683 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[ce254235-18c3-4321-bc31-2032e4223eb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.730 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b9f84e-fd90-4b7c-8587-2e59d972b7ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.732 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82a5dc98-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.732 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.733 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82a5dc98-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:07:29 compute-0 nova_compute[254880]: 2026-01-26 10:07:29.734 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:29 compute-0 kernel: tap82a5dc98-30: entered promiscuous mode
Jan 26 10:07:29 compute-0 NetworkManager[48970]: <info>  [1769422049.7368] manager: (tap82a5dc98-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 26 10:07:29 compute-0 nova_compute[254880]: 2026-01-26 10:07:29.736 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.739 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82a5dc98-30, col_values=(('external_ids', {'iface-id': 'f135fcb4-0aaa-4245-b861-b202d1bc430c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:07:29 compute-0 nova_compute[254880]: 2026-01-26 10:07:29.740 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:29 compute-0 ovn_controller[155832]: 2026-01-26T10:07:29Z|00031|binding|INFO|Releasing lport f135fcb4-0aaa-4245-b861-b202d1bc430c from this chassis (sb_readonly=0)
Jan 26 10:07:29 compute-0 nova_compute[254880]: 2026-01-26 10:07:29.741 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.743 166625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82a5dc98-3279-47e7-b5f8-a111d4ea33ff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82a5dc98-3279-47e7-b5f8-a111d4ea33ff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.744 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[bf14b10c-b069-4efa-a1d7-ff5a81cff430]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.745 166625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: global
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     log         /dev/log local0 debug
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     log-tag     haproxy-metadata-proxy-82a5dc98-3279-47e7-b5f8-a111d4ea33ff
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     user        root
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     group       root
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     maxconn     1024
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     pidfile     /var/lib/neutron/external/pids/82a5dc98-3279-47e7-b5f8-a111d4ea33ff.pid.haproxy
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     daemon
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: defaults
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     log global
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     mode http
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     option httplog
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     option dontlognull
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     option http-server-close
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     option forwardfor
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     retries                 3
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     timeout http-request    30s
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     timeout connect         30s
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     timeout client          32s
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     timeout server          32s
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     timeout http-keep-alive 30s
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: listen listener
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     bind 169.254.169.254:80
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:     http-request add-header X-OVN-Network-ID 82a5dc98-3279-47e7-b5f8-a111d4ea33ff
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 10:07:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:29.746 166625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff', 'env', 'PROCESS_TAG=haproxy-82a5dc98-3279-47e7-b5f8-a111d4ea33ff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82a5dc98-3279-47e7-b5f8-a111d4ea33ff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 10:07:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:29 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:29 compute-0 nova_compute[254880]: 2026-01-26 10:07:29.754 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:29 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:29.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:29 compute-0 nova_compute[254880]: 2026-01-26 10:07:29.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:07:29 compute-0 nova_compute[254880]: 2026-01-26 10:07:29.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:07:29 compute-0 nova_compute[254880]: 2026-01-26 10:07:29.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:07:30 compute-0 podman[261600]: 2026-01-26 10:07:30.039316321 +0000 UTC m=+0.644674857 container exec c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:07:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:07:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:30.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:07:30 compute-0 ceph-mon[74456]: pgmap v758: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 26 10:07:30 compute-0 podman[261600]: 2026-01-26 10:07:30.463714727 +0000 UTC m=+1.069073243 container exec_died c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:07:30 compute-0 nova_compute[254880]: 2026-01-26 10:07:30.580 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "refresh_cache-5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:07:30 compute-0 nova_compute[254880]: 2026-01-26 10:07:30.581 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquired lock "refresh_cache-5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:07:30 compute-0 nova_compute[254880]: 2026-01-26 10:07:30.581 254884 DEBUG nova.network.neutron [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 10:07:30 compute-0 nova_compute[254880]: 2026-01-26 10:07:30.581 254884 DEBUG nova.objects.instance [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:07:30 compute-0 podman[261684]: 2026-01-26 10:07:30.601407206 +0000 UTC m=+0.538378713 container create 942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 26 10:07:30 compute-0 podman[261684]: 2026-01-26 10:07:30.519744567 +0000 UTC m=+0.456716104 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 26 10:07:30 compute-0 systemd[1]: Started libpod-conmon-942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060.scope.
Jan 26 10:07:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc29809c242388021b7b334ce027d237638e57937fb07c975db2b13b8498d9b8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:30 compute-0 podman[261684]: 2026-01-26 10:07:30.725683166 +0000 UTC m=+0.662654673 container init 942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:07:30 compute-0 podman[261684]: 2026-01-26 10:07:30.73172991 +0000 UTC m=+0.668701417 container start 942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 26 10:07:30 compute-0 neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff[261743]: [NOTICE]   (261761) : New worker (261763) forked
Jan 26 10:07:30 compute-0 neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff[261743]: [NOTICE]   (261761) : Loading success.
Jan 26 10:07:30 compute-0 podman[261745]: 2026-01-26 10:07:30.781692096 +0000 UTC m=+0.087161330 container exec ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 10:07:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:30 compute-0 podman[261745]: 2026-01-26 10:07:30.970775405 +0000 UTC m=+0.276244639 container exec_died ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 10:07:31 compute-0 podman[261870]: 2026-01-26 10:07:31.370979717 +0000 UTC m=+0.057947247 container exec 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:07:31 compute-0 podman[261870]: 2026-01-26 10:07:31.418346031 +0000 UTC m=+0.105313551 container exec_died 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:07:31 compute-0 sudo[261094]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:07:31 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:07:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:07:31 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:07:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v759: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Jan 26 10:07:31 compute-0 sudo[261914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:07:31 compute-0 sudo[261914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:31 compute-0 sudo[261914]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:31 compute-0 sudo[261939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:07:31 compute-0 sudo[261939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:31 compute-0 nova_compute[254880]: 2026-01-26 10:07:31.744 254884 DEBUG nova.network.neutron [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Updating instance_info_cache with network_info: [{"id": "9e43222f-ece8-42ba-968c-6ed6feedb649", "address": "fa:16:3e:77:20:58", "network": {"id": "82a5dc98-3279-47e7-b5f8-a111d4ea33ff", "bridge": "br-int", "label": "tempest-network-smoke--814873933", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e43222f-ec", "ovs_interfaceid": "9e43222f-ece8-42ba-968c-6ed6feedb649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:07:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:31 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:31 compute-0 nova_compute[254880]: 2026-01-26 10:07:31.768 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Releasing lock "refresh_cache-5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:07:31 compute-0 nova_compute[254880]: 2026-01-26 10:07:31.769 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 10:07:31 compute-0 nova_compute[254880]: 2026-01-26 10:07:31.769 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:07:31 compute-0 nova_compute[254880]: 2026-01-26 10:07:31.770 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:07:31 compute-0 nova_compute[254880]: 2026-01-26 10:07:31.770 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:07:31 compute-0 nova_compute[254880]: 2026-01-26 10:07:31.770 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:07:31 compute-0 nova_compute[254880]: 2026-01-26 10:07:31.789 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:31 compute-0 nova_compute[254880]: 2026-01-26 10:07:31.789 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:31 compute-0 nova_compute[254880]: 2026-01-26 10:07:31.790 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:31 compute-0 nova_compute[254880]: 2026-01-26 10:07:31.790 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:07:31 compute-0 nova_compute[254880]: 2026-01-26 10:07:31.790 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:07:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:31 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:31.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:32 compute-0 sudo[261939]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:07:32 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:07:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:07:32 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:07:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:07:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:07:32 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3858921524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.286 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:07:32 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:07:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:07:32 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:07:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:07:32 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:07:32 compute-0 NetworkManager[48970]: <info>  [1769422052.3185] manager: (patch-br-int-to-provnet-94d9950f-5cf2-4813-9455-dd14377245f4): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Jan 26 10:07:32 compute-0 NetworkManager[48970]: <info>  [1769422052.3193] device (patch-br-int-to-provnet-94d9950f-5cf2-4813-9455-dd14377245f4)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.318 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:32 compute-0 NetworkManager[48970]: <warn>  [1769422052.3195] device (patch-br-int-to-provnet-94d9950f-5cf2-4813-9455-dd14377245f4)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 10:07:32 compute-0 NetworkManager[48970]: <info>  [1769422052.3206] manager: (patch-provnet-94d9950f-5cf2-4813-9455-dd14377245f4-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Jan 26 10:07:32 compute-0 NetworkManager[48970]: <info>  [1769422052.3208] device (patch-provnet-94d9950f-5cf2-4813-9455-dd14377245f4-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 10:07:32 compute-0 NetworkManager[48970]: <warn>  [1769422052.3209] device (patch-provnet-94d9950f-5cf2-4813-9455-dd14377245f4-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 10:07:32 compute-0 NetworkManager[48970]: <info>  [1769422052.3215] manager: (patch-br-int-to-provnet-94d9950f-5cf2-4813-9455-dd14377245f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 26 10:07:32 compute-0 NetworkManager[48970]: <info>  [1769422052.3219] manager: (patch-provnet-94d9950f-5cf2-4813-9455-dd14377245f4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Jan 26 10:07:32 compute-0 NetworkManager[48970]: <info>  [1769422052.3222] device (patch-br-int-to-provnet-94d9950f-5cf2-4813-9455-dd14377245f4)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 26 10:07:32 compute-0 NetworkManager[48970]: <info>  [1769422052.3224] device (patch-provnet-94d9950f-5cf2-4813-9455-dd14377245f4-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 26 10:07:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:07:32 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:07:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:07:32 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:07:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:07:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:32.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:07:32 compute-0 sudo[262018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:07:32 compute-0 sudo[262018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:32 compute-0 sudo[262018]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.390 254884 DEBUG nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.390 254884 DEBUG nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.399 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:32 compute-0 ovn_controller[155832]: 2026-01-26T10:07:32Z|00032|binding|INFO|Releasing lport f135fcb4-0aaa-4245-b861-b202d1bc430c from this chassis (sb_readonly=0)
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.411 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:32 compute-0 sudo[262044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:07:32 compute-0 sudo[262044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.554 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.555 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4412MB free_disk=59.92194747924805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.556 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.556 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:32 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:07:32 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:07:32 compute-0 ceph-mon[74456]: pgmap v759: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Jan 26 10:07:32 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3033423951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:32 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:07:32 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:07:32 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3858921524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:32 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:07:32 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:07:32 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:07:32 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:07:32 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:07:32 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/521770293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.773 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Instance 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.774 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.775 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:07:32 compute-0 nova_compute[254880]: 2026-01-26 10:07:32.811 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:07:32 compute-0 podman[262114]: 2026-01-26 10:07:32.854290932 +0000 UTC m=+0.052746252 container create f3ca42482d2a0e497836edb136d38538db3d3a39e909043706baeefc379c792f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_hoover, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:07:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:32 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:32 compute-0 systemd[1]: Started libpod-conmon-f3ca42482d2a0e497836edb136d38538db3d3a39e909043706baeefc379c792f.scope.
Jan 26 10:07:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:07:32 compute-0 podman[262114]: 2026-01-26 10:07:32.829488434 +0000 UTC m=+0.027943754 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:07:32 compute-0 podman[262114]: 2026-01-26 10:07:32.944173377 +0000 UTC m=+0.142628707 container init f3ca42482d2a0e497836edb136d38538db3d3a39e909043706baeefc379c792f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:07:32 compute-0 podman[262114]: 2026-01-26 10:07:32.950995668 +0000 UTC m=+0.149450988 container start f3ca42482d2a0e497836edb136d38538db3d3a39e909043706baeefc379c792f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_hoover, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 26 10:07:32 compute-0 podman[262114]: 2026-01-26 10:07:32.954793689 +0000 UTC m=+0.153249039 container attach f3ca42482d2a0e497836edb136d38538db3d3a39e909043706baeefc379c792f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_hoover, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:07:32 compute-0 exciting_hoover[262132]: 167 167
Jan 26 10:07:32 compute-0 systemd[1]: libpod-f3ca42482d2a0e497836edb136d38538db3d3a39e909043706baeefc379c792f.scope: Deactivated successfully.
Jan 26 10:07:32 compute-0 conmon[262132]: conmon f3ca42482d2a0e497836 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f3ca42482d2a0e497836edb136d38538db3d3a39e909043706baeefc379c792f.scope/container/memory.events
Jan 26 10:07:32 compute-0 podman[262114]: 2026-01-26 10:07:32.959638364 +0000 UTC m=+0.158093694 container died f3ca42482d2a0e497836edb136d38538db3d3a39e909043706baeefc379c792f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_hoover, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:07:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0111c66026a1e538953af3eec317aa5fb2b72d930c9321a501dbd74a34151f9-merged.mount: Deactivated successfully.
Jan 26 10:07:32 compute-0 podman[262114]: 2026-01-26 10:07:32.999137622 +0000 UTC m=+0.197592942 container remove f3ca42482d2a0e497836edb136d38538db3d3a39e909043706baeefc379c792f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 10:07:33 compute-0 systemd[1]: libpod-conmon-f3ca42482d2a0e497836edb136d38538db3d3a39e909043706baeefc379c792f.scope: Deactivated successfully.
Jan 26 10:07:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:07:33 compute-0 podman[262174]: 2026-01-26 10:07:33.157812399 +0000 UTC m=+0.042517671 container create 0d0217d5bdc1e4aa09652b67349dd2488195f6db9a8e8981a7c208591bf5c32f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Jan 26 10:07:33 compute-0 systemd[1]: Started libpod-conmon-0d0217d5bdc1e4aa09652b67349dd2488195f6db9a8e8981a7c208591bf5c32f.scope.
Jan 26 10:07:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5aa427b6e20e3d0cf5c881a5c9cf8c880e742f1b22299a999f4d66c7af1c99c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5aa427b6e20e3d0cf5c881a5c9cf8c880e742f1b22299a999f4d66c7af1c99c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5aa427b6e20e3d0cf5c881a5c9cf8c880e742f1b22299a999f4d66c7af1c99c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5aa427b6e20e3d0cf5c881a5c9cf8c880e742f1b22299a999f4d66c7af1c99c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5aa427b6e20e3d0cf5c881a5c9cf8c880e742f1b22299a999f4d66c7af1c99c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:33 compute-0 podman[262174]: 2026-01-26 10:07:33.139267859 +0000 UTC m=+0.023973171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:07:33 compute-0 podman[262174]: 2026-01-26 10:07:33.239791355 +0000 UTC m=+0.124496657 container init 0d0217d5bdc1e4aa09652b67349dd2488195f6db9a8e8981a7c208591bf5c32f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 10:07:33 compute-0 podman[262174]: 2026-01-26 10:07:33.245181753 +0000 UTC m=+0.129887035 container start 0d0217d5bdc1e4aa09652b67349dd2488195f6db9a8e8981a7c208591bf5c32f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_varahamihira, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 10:07:33 compute-0 podman[262174]: 2026-01-26 10:07:33.247955309 +0000 UTC m=+0.132660601 container attach 0d0217d5bdc1e4aa09652b67349dd2488195f6db9a8e8981a7c208591bf5c32f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 10:07:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:07:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350250843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.283 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.289 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating inventory in ProviderTree for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.329 254884 ERROR nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [req-f3b9f942-9205-4b3c-929d-dbd5416f99de] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 0dd9ba26-1c92-4319-953d-4e0ed59143cf.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-f3b9f942-9205-4b3c-929d-dbd5416f99de"}]}
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.343 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing inventories for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.360 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating ProviderTree inventory for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.360 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating inventory in ProviderTree for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.381 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing aggregate associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.404 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing trait associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.447 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:07:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v760: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 82 op/s
Jan 26 10:07:33 compute-0 sweet_varahamihira[262190]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:07:33 compute-0 sweet_varahamihira[262190]: --> All data devices are unavailable
Jan 26 10:07:33 compute-0 systemd[1]: libpod-0d0217d5bdc1e4aa09652b67349dd2488195f6db9a8e8981a7c208591bf5c32f.scope: Deactivated successfully.
Jan 26 10:07:33 compute-0 podman[262174]: 2026-01-26 10:07:33.572542915 +0000 UTC m=+0.457248197 container died 0d0217d5bdc1e4aa09652b67349dd2488195f6db9a8e8981a7c208591bf5c32f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_varahamihira, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 26 10:07:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5aa427b6e20e3d0cf5c881a5c9cf8c880e742f1b22299a999f4d66c7af1c99c-merged.mount: Deactivated successfully.
Jan 26 10:07:33 compute-0 podman[262174]: 2026-01-26 10:07:33.610779363 +0000 UTC m=+0.495484645 container remove 0d0217d5bdc1e4aa09652b67349dd2488195f6db9a8e8981a7c208591bf5c32f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 26 10:07:33 compute-0 sudo[262044]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:33 compute-0 systemd[1]: libpod-conmon-0d0217d5bdc1e4aa09652b67349dd2488195f6db9a8e8981a7c208591bf5c32f.scope: Deactivated successfully.
Jan 26 10:07:33 compute-0 sudo[262240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:07:33 compute-0 sudo[262240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:33 compute-0 sudo[262240]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:07:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:07:33 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3350250843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:33 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1049851893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:07:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:33 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:33 compute-0 sudo[262265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:07:33 compute-0 sudo[262265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:33 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100733 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:07:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:07:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745198742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:33.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.921 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.927 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating inventory in ProviderTree for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.974 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updated inventory for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.974 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.975 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating inventory in ProviderTree for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.997 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:07:33 compute-0 nova_compute[254880]: 2026-01-26 10:07:33.998 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:34 compute-0 nova_compute[254880]: 2026-01-26 10:07:34.096 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:34 compute-0 podman[262333]: 2026-01-26 10:07:34.201553759 +0000 UTC m=+0.037812479 container create 2beef31264466737112e588fb4b9cc785639f3c280efd06fe6347f4fadc47d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 26 10:07:34 compute-0 nova_compute[254880]: 2026-01-26 10:07:34.208 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:34 compute-0 systemd[1]: Started libpod-conmon-2beef31264466737112e588fb4b9cc785639f3c280efd06fe6347f4fadc47d40.scope.
Jan 26 10:07:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:07:34 compute-0 podman[262333]: 2026-01-26 10:07:34.270068206 +0000 UTC m=+0.106326936 container init 2beef31264466737112e588fb4b9cc785639f3c280efd06fe6347f4fadc47d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:07:34 compute-0 podman[262333]: 2026-01-26 10:07:34.276795915 +0000 UTC m=+0.113054635 container start 2beef31264466737112e588fb4b9cc785639f3c280efd06fe6347f4fadc47d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hugle, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:07:34 compute-0 podman[262333]: 2026-01-26 10:07:34.184684198 +0000 UTC m=+0.020942938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:07:34 compute-0 podman[262333]: 2026-01-26 10:07:34.280124394 +0000 UTC m=+0.116383144 container attach 2beef31264466737112e588fb4b9cc785639f3c280efd06fe6347f4fadc47d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hugle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:07:34 compute-0 wizardly_hugle[262349]: 167 167
Jan 26 10:07:34 compute-0 systemd[1]: libpod-2beef31264466737112e588fb4b9cc785639f3c280efd06fe6347f4fadc47d40.scope: Deactivated successfully.
Jan 26 10:07:34 compute-0 podman[262333]: 2026-01-26 10:07:34.281776283 +0000 UTC m=+0.118035003 container died 2beef31264466737112e588fb4b9cc785639f3c280efd06fe6347f4fadc47d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hugle, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 26 10:07:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-18f4632e0244248c87b3fa8c5ccf1fefd733738033ca20ae04a2dcd44a5fa11a-merged.mount: Deactivated successfully.
Jan 26 10:07:34 compute-0 podman[262333]: 2026-01-26 10:07:34.315297359 +0000 UTC m=+0.151556079 container remove 2beef31264466737112e588fb4b9cc785639f3c280efd06fe6347f4fadc47d40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_hugle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 10:07:34 compute-0 systemd[1]: libpod-conmon-2beef31264466737112e588fb4b9cc785639f3c280efd06fe6347f4fadc47d40.scope: Deactivated successfully.
Jan 26 10:07:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:34.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:34 compute-0 podman[262375]: 2026-01-26 10:07:34.480122523 +0000 UTC m=+0.042527452 container create 446ac20448ebd4b490f6e34fa0b22fe473d26c5493c9a7cde563c531a9aa3fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 10:07:34 compute-0 systemd[1]: Started libpod-conmon-446ac20448ebd4b490f6e34fa0b22fe473d26c5493c9a7cde563c531a9aa3fe1.scope.
Jan 26 10:07:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7514c51c8557cb57621aec5734d9fdef37753667ca78840da69dea6b1b75c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7514c51c8557cb57621aec5734d9fdef37753667ca78840da69dea6b1b75c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7514c51c8557cb57621aec5734d9fdef37753667ca78840da69dea6b1b75c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd7514c51c8557cb57621aec5734d9fdef37753667ca78840da69dea6b1b75c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:34 compute-0 podman[262375]: 2026-01-26 10:07:34.459349189 +0000 UTC m=+0.021754138 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:07:34 compute-0 podman[262375]: 2026-01-26 10:07:34.570781294 +0000 UTC m=+0.133186253 container init 446ac20448ebd4b490f6e34fa0b22fe473d26c5493c9a7cde563c531a9aa3fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 10:07:34 compute-0 podman[262375]: 2026-01-26 10:07:34.576655794 +0000 UTC m=+0.139060723 container start 446ac20448ebd4b490f6e34fa0b22fe473d26c5493c9a7cde563c531a9aa3fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_heisenberg, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:07:34 compute-0 podman[262375]: 2026-01-26 10:07:34.58026146 +0000 UTC m=+0.142666419 container attach 446ac20448ebd4b490f6e34fa0b22fe473d26c5493c9a7cde563c531a9aa3fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_heisenberg, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:07:34 compute-0 ceph-mon[74456]: pgmap v760: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 82 op/s
Jan 26 10:07:34 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2745198742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]: {
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:     "0": [
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:         {
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:             "devices": [
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "/dev/loop3"
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:             ],
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:             "lv_name": "ceph_lv0",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:             "lv_size": "21470642176",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:             "name": "ceph_lv0",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:             "tags": {
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "ceph.cluster_name": "ceph",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "ceph.crush_device_class": "",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "ceph.encrypted": "0",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "ceph.osd_id": "0",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "ceph.type": "block",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "ceph.vdo": "0",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:                 "ceph.with_tpm": "0"
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:             },
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:             "type": "block",
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:             "vg_name": "ceph_vg0"
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:         }
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]:     ]
Jan 26 10:07:34 compute-0 funny_heisenberg[262392]: }
Jan 26 10:07:34 compute-0 systemd[1]: libpod-446ac20448ebd4b490f6e34fa0b22fe473d26c5493c9a7cde563c531a9aa3fe1.scope: Deactivated successfully.
Jan 26 10:07:34 compute-0 podman[262375]: 2026-01-26 10:07:34.854156393 +0000 UTC m=+0.416561322 container died 446ac20448ebd4b490f6e34fa0b22fe473d26c5493c9a7cde563c531a9aa3fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_heisenberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 26 10:07:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:34 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:35 compute-0 nova_compute[254880]: 2026-01-26 10:07:35.186 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:07:35 compute-0 nova_compute[254880]: 2026-01-26 10:07:35.231 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:07:35 compute-0 nova_compute[254880]: 2026-01-26 10:07:35.232 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:07:35 compute-0 nova_compute[254880]: 2026-01-26 10:07:35.233 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:07:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v761: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 83 op/s
Jan 26 10:07:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:35 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:35 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd7514c51c8557cb57621aec5734d9fdef37753667ca78840da69dea6b1b75c8-merged.mount: Deactivated successfully.
Jan 26 10:07:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:07:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:35.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:07:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:07:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:36.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:07:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:07:36] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Jan 26 10:07:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:07:36] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Jan 26 10:07:36 compute-0 ceph-mon[74456]: pgmap v761: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 83 op/s
Jan 26 10:07:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:36 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:37 compute-0 podman[262375]: 2026-01-26 10:07:37.09906509 +0000 UTC m=+2.661470019 container remove 446ac20448ebd4b490f6e34fa0b22fe473d26c5493c9a7cde563c531a9aa3fe1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_heisenberg, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:07:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:07:37.116Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:07:37 compute-0 systemd[1]: libpod-conmon-446ac20448ebd4b490f6e34fa0b22fe473d26c5493c9a7cde563c531a9aa3fe1.scope: Deactivated successfully.
Jan 26 10:07:37 compute-0 sudo[262265]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:37 compute-0 sudo[262418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:07:37 compute-0 sudo[262418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:37 compute-0 sudo[262418]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:37 compute-0 sudo[262443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:07:37 compute-0 sudo[262443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v762: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Jan 26 10:07:37 compute-0 podman[262509]: 2026-01-26 10:07:37.702264841 +0000 UTC m=+0.053548322 container create 5284200d62431c015540289f613712d7bb793b51960dc3b3308bceef4a3c03cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_noyce, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 10:07:37 compute-0 systemd[1]: Started libpod-conmon-5284200d62431c015540289f613712d7bb793b51960dc3b3308bceef4a3c03cb.scope.
Jan 26 10:07:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24001510 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:37 compute-0 podman[262509]: 2026-01-26 10:07:37.6727618 +0000 UTC m=+0.024045301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:07:37 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:07:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:37.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:37 compute-0 podman[262509]: 2026-01-26 10:07:37.930252484 +0000 UTC m=+0.281535995 container init 5284200d62431c015540289f613712d7bb793b51960dc3b3308bceef4a3c03cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_noyce, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:07:37 compute-0 podman[262509]: 2026-01-26 10:07:37.944145954 +0000 UTC m=+0.295429435 container start 5284200d62431c015540289f613712d7bb793b51960dc3b3308bceef4a3c03cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:07:37 compute-0 podman[262509]: 2026-01-26 10:07:37.947992024 +0000 UTC m=+0.299275525 container attach 5284200d62431c015540289f613712d7bb793b51960dc3b3308bceef4a3c03cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_noyce, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:07:37 compute-0 sweet_noyce[262525]: 167 167
Jan 26 10:07:37 compute-0 systemd[1]: libpod-5284200d62431c015540289f613712d7bb793b51960dc3b3308bceef4a3c03cb.scope: Deactivated successfully.
Jan 26 10:07:37 compute-0 podman[262509]: 2026-01-26 10:07:37.95033153 +0000 UTC m=+0.301615031 container died 5284200d62431c015540289f613712d7bb793b51960dc3b3308bceef4a3c03cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 10:07:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-9aa5aa73e301bcef62f41d157eed1b893ac3877e74a1cabf7956c683e3d9c6ae-merged.mount: Deactivated successfully.
Jan 26 10:07:37 compute-0 ceph-mon[74456]: pgmap v762: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Jan 26 10:07:38 compute-0 podman[262509]: 2026-01-26 10:07:38.047022586 +0000 UTC m=+0.398306067 container remove 5284200d62431c015540289f613712d7bb793b51960dc3b3308bceef4a3c03cb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_noyce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:07:38 compute-0 systemd[1]: libpod-conmon-5284200d62431c015540289f613712d7bb793b51960dc3b3308bceef4a3c03cb.scope: Deactivated successfully.
Jan 26 10:07:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:07:38 compute-0 podman[262549]: 2026-01-26 10:07:38.226796884 +0000 UTC m=+0.048371470 container create d7a6f69142891a6dce5e2bf620aae9926002a70f4472fa4780fdc471a672a8a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:07:38 compute-0 systemd[1]: Started libpod-conmon-d7a6f69142891a6dce5e2bf620aae9926002a70f4472fa4780fdc471a672a8a3.scope.
Jan 26 10:07:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3523cce7fe9e61cb6a3a50fbb82db5718983de128a5dc5eaf5ff09c26af9ee39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3523cce7fe9e61cb6a3a50fbb82db5718983de128a5dc5eaf5ff09c26af9ee39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3523cce7fe9e61cb6a3a50fbb82db5718983de128a5dc5eaf5ff09c26af9ee39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3523cce7fe9e61cb6a3a50fbb82db5718983de128a5dc5eaf5ff09c26af9ee39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:07:38 compute-0 podman[262549]: 2026-01-26 10:07:38.202512677 +0000 UTC m=+0.024087293 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:07:38 compute-0 podman[262549]: 2026-01-26 10:07:38.305355169 +0000 UTC m=+0.126929785 container init d7a6f69142891a6dce5e2bf620aae9926002a70f4472fa4780fdc471a672a8a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 26 10:07:38 compute-0 podman[262549]: 2026-01-26 10:07:38.314890135 +0000 UTC m=+0.136464711 container start d7a6f69142891a6dce5e2bf620aae9926002a70f4472fa4780fdc471a672a8a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_wu, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 26 10:07:38 compute-0 podman[262549]: 2026-01-26 10:07:38.340142894 +0000 UTC m=+0.161717500 container attach d7a6f69142891a6dce5e2bf620aae9926002a70f4472fa4780fdc471a672a8a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_wu, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:07:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:38.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:38 compute-0 podman[262564]: 2026-01-26 10:07:38.373118928 +0000 UTC m=+0.098826208 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 10:07:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:38 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:38 compute-0 lvm[262662]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:07:38 compute-0 lvm[262662]: VG ceph_vg0 finished
Jan 26 10:07:38 compute-0 amazing_wu[262567]: {}
Jan 26 10:07:39 compute-0 systemd[1]: libpod-d7a6f69142891a6dce5e2bf620aae9926002a70f4472fa4780fdc471a672a8a3.scope: Deactivated successfully.
Jan 26 10:07:39 compute-0 systemd[1]: libpod-d7a6f69142891a6dce5e2bf620aae9926002a70f4472fa4780fdc471a672a8a3.scope: Consumed 1.046s CPU time.
Jan 26 10:07:39 compute-0 podman[262549]: 2026-01-26 10:07:39.006856394 +0000 UTC m=+0.828430980 container died d7a6f69142891a6dce5e2bf620aae9926002a70f4472fa4780fdc471a672a8a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_wu, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:07:39 compute-0 nova_compute[254880]: 2026-01-26 10:07:39.108 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-3523cce7fe9e61cb6a3a50fbb82db5718983de128a5dc5eaf5ff09c26af9ee39-merged.mount: Deactivated successfully.
Jan 26 10:07:39 compute-0 podman[262549]: 2026-01-26 10:07:39.178296424 +0000 UTC m=+0.999871010 container remove d7a6f69142891a6dce5e2bf620aae9926002a70f4472fa4780fdc471a672a8a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_wu, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:07:39 compute-0 systemd[1]: libpod-conmon-d7a6f69142891a6dce5e2bf620aae9926002a70f4472fa4780fdc471a672a8a3.scope: Deactivated successfully.
Jan 26 10:07:39 compute-0 nova_compute[254880]: 2026-01-26 10:07:39.209 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:39 compute-0 sudo[262443]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:07:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v763: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Jan 26 10:07:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:07:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:07:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:39 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:39 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24001510 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:07:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:07:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:39.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:07:39 compute-0 sudo[262680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:07:39 compute-0 sudo[262680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:39 compute-0 sudo[262680]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:40.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:40 compute-0 ceph-mon[74456]: pgmap v763: 353 pgs: 353 active+clean; 167 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Jan 26 10:07:40 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:07:40 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:07:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:41 compute-0 ovn_controller[155832]: 2026-01-26T10:07:41Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:20:58 10.100.0.22
Jan 26 10:07:41 compute-0 ovn_controller[155832]: 2026-01-26T10:07:41Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:20:58 10.100.0.22
Jan 26 10:07:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v764: 353 pgs: 353 active+clean; 179 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Jan 26 10:07:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:41 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:41 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:07:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:41.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:07:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:42.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:42 compute-0 sudo[262710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:07:42 compute-0 sudo[262710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:07:42 compute-0 sudo[262710]: pam_unix(sudo:session): session closed for user root
Jan 26 10:07:42 compute-0 ceph-mon[74456]: pgmap v764: 353 pgs: 353 active+clean; 179 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Jan 26 10:07:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:42 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:07:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:07:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v765: 353 pgs: 353 active+clean; 179 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 19 op/s
Jan 26 10:07:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:43.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:44 compute-0 nova_compute[254880]: 2026-01-26 10:07:44.110 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:44 compute-0 nova_compute[254880]: 2026-01-26 10:07:44.211 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:07:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:44.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:07:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:44 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v766: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:07:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:45 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:45 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38008dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:45.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:46 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:07:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:46 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:07:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:46 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:07:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:46.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:46 compute-0 ceph-mon[74456]: pgmap v765: 353 pgs: 353 active+clean; 179 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 19 op/s
Jan 26 10:07:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:07:46] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Jan 26 10:07:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:07:46] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Jan 26 10:07:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:46 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:07:47.117Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:07:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v767: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:07:47 compute-0 ceph-mon[74456]: pgmap v766: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:07:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:47 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240023c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:47 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:07:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:47.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:07:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:07:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:48.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:07:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:07:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:07:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:07:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:07:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:07:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:07:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:07:48 compute-0 ceph-mon[74456]: pgmap v767: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:07:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:07:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:48 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38008dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:49 compute-0 nova_compute[254880]: 2026-01-26 10:07:49.113 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:49 compute-0 nova_compute[254880]: 2026-01-26 10:07:49.213 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:49 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 26 10:07:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v768: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:07:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:49 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:49 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002560 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:49.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:50 compute-0 ceph-mon[74456]: pgmap v768: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:07:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:50.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v769: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 26 10:07:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:51 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38008dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:51 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:51 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:51.900 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:07:51 compute-0 nova_compute[254880]: 2026-01-26 10:07:51.901 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:51 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:51.902 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 10:07:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:51.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:07:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:52.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:07:52 compute-0 nova_compute[254880]: 2026-01-26 10:07:52.801 254884 DEBUG oslo_concurrency.lockutils [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:52 compute-0 nova_compute[254880]: 2026-01-26 10:07:52.801 254884 DEBUG oslo_concurrency.lockutils [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:52 compute-0 nova_compute[254880]: 2026-01-26 10:07:52.803 254884 DEBUG oslo_concurrency.lockutils [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:52 compute-0 nova_compute[254880]: 2026-01-26 10:07:52.803 254884 DEBUG oslo_concurrency.lockutils [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:52 compute-0 nova_compute[254880]: 2026-01-26 10:07:52.803 254884 DEBUG oslo_concurrency.lockutils [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:52 compute-0 ceph-mon[74456]: pgmap v769: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 26 10:07:52 compute-0 nova_compute[254880]: 2026-01-26 10:07:52.805 254884 INFO nova.compute.manager [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Terminating instance
Jan 26 10:07:52 compute-0 nova_compute[254880]: 2026-01-26 10:07:52.806 254884 DEBUG nova.compute.manager [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 10:07:52 compute-0 kernel: tap9e43222f-ec (unregistering): left promiscuous mode
Jan 26 10:07:52 compute-0 NetworkManager[48970]: <info>  [1769422072.8521] device (tap9e43222f-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 10:07:52 compute-0 ovn_controller[155832]: 2026-01-26T10:07:52Z|00033|binding|INFO|Releasing lport 9e43222f-ece8-42ba-968c-6ed6feedb649 from this chassis (sb_readonly=0)
Jan 26 10:07:52 compute-0 ovn_controller[155832]: 2026-01-26T10:07:52Z|00034|binding|INFO|Setting lport 9e43222f-ece8-42ba-968c-6ed6feedb649 down in Southbound
Jan 26 10:07:52 compute-0 ovn_controller[155832]: 2026-01-26T10:07:52Z|00035|binding|INFO|Removing iface tap9e43222f-ec ovn-installed in OVS
Jan 26 10:07:52 compute-0 nova_compute[254880]: 2026-01-26 10:07:52.861 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:52 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:52.869 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:20:58 10.100.0.22'], port_security=['fa:16:3e:77:20:58 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': '5ac85101-7f84-4ad6-b66a-95cd2fdfcd14', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82a5dc98-3279-47e7-b5f8-a111d4ea33ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9437f473-f5d5-4abb-a3a1-691a33bf3b29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30cd5c01-d547-4b4e-a8ed-aeb208f30737, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], logical_port=9e43222f-ece8-42ba-968c-6ed6feedb649) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:07:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:52 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240036e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:52 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:52.871 166625 INFO neutron.agent.ovn.metadata.agent [-] Port 9e43222f-ece8-42ba-968c-6ed6feedb649 in datapath 82a5dc98-3279-47e7-b5f8-a111d4ea33ff unbound from our chassis
Jan 26 10:07:52 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:52.872 166625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82a5dc98-3279-47e7-b5f8-a111d4ea33ff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 10:07:52 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:52.873 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[39f53c86-3cf8-4593-b4aa-8bd7c6e59bbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:52 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:52.874 166625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff namespace which is not needed anymore
Jan 26 10:07:52 compute-0 nova_compute[254880]: 2026-01-26 10:07:52.880 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:52 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 26 10:07:52 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 14.718s CPU time.
Jan 26 10:07:52 compute-0 systemd-machined[221254]: Machine qemu-1-instance-00000002 terminated.
Jan 26 10:07:52 compute-0 podman[262745]: 2026-01-26 10:07:52.966581947 +0000 UTC m=+0.092176160 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.050 254884 INFO nova.virt.libvirt.driver [-] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Instance destroyed successfully.
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.051 254884 DEBUG nova.objects.instance [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'resources' on Instance uuid 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.067 254884 DEBUG nova.virt.libvirt.vif [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T10:07:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1236230435',display_name='tempest-TestNetworkBasicOps-server-1236230435',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1236230435',id=2,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFTOs3BXjnN5+km+pnLH2Ek/lorLOv1RvQSyPovSAMkr1PMNI58K7B5CMpbJHI4DHjOvYyHXNzgdFUGarrhqe58ezYN8ulK/lRs2EXeW8gH8d4vZ1Z0yG61vGiMDueIFbg==',key_name='tempest-TestNetworkBasicOps-1126988904',keypairs=<?>,launch_index=0,launched_at=2026-01-26T10:07:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-q0cadtx1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T10:07:27Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=5ac85101-7f84-4ad6-b66a-95cd2fdfcd14,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e43222f-ece8-42ba-968c-6ed6feedb649", "address": "fa:16:3e:77:20:58", "network": {"id": "82a5dc98-3279-47e7-b5f8-a111d4ea33ff", "bridge": "br-int", "label": "tempest-network-smoke--814873933", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e43222f-ec", "ovs_interfaceid": "9e43222f-ece8-42ba-968c-6ed6feedb649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.067 254884 DEBUG nova.network.os_vif_util [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "9e43222f-ece8-42ba-968c-6ed6feedb649", "address": "fa:16:3e:77:20:58", "network": {"id": "82a5dc98-3279-47e7-b5f8-a111d4ea33ff", "bridge": "br-int", "label": "tempest-network-smoke--814873933", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e43222f-ec", "ovs_interfaceid": "9e43222f-ece8-42ba-968c-6ed6feedb649", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.068 254884 DEBUG nova.network.os_vif_util [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:20:58,bridge_name='br-int',has_traffic_filtering=True,id=9e43222f-ece8-42ba-968c-6ed6feedb649,network=Network(82a5dc98-3279-47e7-b5f8-a111d4ea33ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e43222f-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.068 254884 DEBUG os_vif [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:20:58,bridge_name='br-int',has_traffic_filtering=True,id=9e43222f-ece8-42ba-968c-6ed6feedb649,network=Network(82a5dc98-3279-47e7-b5f8-a111d4ea33ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e43222f-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.070 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.070 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e43222f-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
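The DelPortCommand above is ovsdbapp's transactional form of removing the tap device from br-int. A minimal sketch of issuing the same deletion through ovsdbapp's public API follows; the ovsdb-server socket path is an assumption, not taken from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local OVSDB socket; adjust for the deployment at hand.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Equivalent CLI: ovs-vsctl --if-exists del-port br-int tap9e43222f-ec
    api.del_port('tap9e43222f-ec', bridge='br-int',
                 if_exists=True).execute(check_error=True)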
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.072 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.074 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.076 254884 INFO os_vif [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:20:58,bridge_name='br-int',has_traffic_filtering=True,id=9e43222f-ece8-42ba-968c-6ed6feedb649,network=Network(82a5dc98-3279-47e7-b5f8-a111d4ea33ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e43222f-ec')
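os_vif is the library boundary here: Nova converts its own VIF dict into the VIFOpenVSwitch object logged above, then calls the library's unplug() entry point, which dispatches to the 'ovs' plugin. A sketch of that call with abbreviated stand-in values (not a complete, working VIF definition):

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # load the os-vif plugins once per process

    my_vif = vif.VIFOpenVSwitch(
        id='9e43222f-ece8-42ba-968c-6ed6feedb649',
        address='fa:16:3e:77:20:58',
        vif_name='tap9e43222f-ec',
        bridge_name='br-int')
    info = instance_info.InstanceInfo(
        uuid='5ac85101-7f84-4ad6-b66a-95cd2fdfcd14',
        name='tempest-TestNetworkBasicOps-server-1236230435')

    os_vif.unplug(my_vif, info)  # the call logged at os_vif/__init__.py:109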
Jan 26 10:07:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.146 254884 DEBUG nova.compute.manager [req-0a550a33-b3c8-49a4-96cc-e8c759f6c7ff req-a3ac90ba-6bb6-4087-bec5-8537f2e619b8 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Received event network-vif-unplugged-9e43222f-ece8-42ba-968c-6ed6feedb649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.148 254884 DEBUG oslo_concurrency.lockutils [req-0a550a33-b3c8-49a4-96cc-e8c759f6c7ff req-a3ac90ba-6bb6-4087-bec5-8537f2e619b8 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.149 254884 DEBUG oslo_concurrency.lockutils [req-0a550a33-b3c8-49a4-96cc-e8c759f6c7ff req-a3ac90ba-6bb6-4087-bec5-8537f2e619b8 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.149 254884 DEBUG oslo_concurrency.lockutils [req-0a550a33-b3c8-49a4-96cc-e8c759f6c7ff req-a3ac90ba-6bb6-4087-bec5-8537f2e619b8 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.150 254884 DEBUG nova.compute.manager [req-0a550a33-b3c8-49a4-96cc-e8c759f6c7ff req-a3ac90ba-6bb6-4087-bec5-8537f2e619b8 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] No waiting events found dispatching network-vif-unplugged-9e43222f-ece8-42ba-968c-6ed6feedb649 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.151 254884 DEBUG nova.compute.manager [req-0a550a33-b3c8-49a4-96cc-e8c759f6c7ff req-a3ac90ba-6bb6-4087-bec5-8537f2e619b8 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Received event network-vif-unplugged-9e43222f-ece8-42ba-968c-6ed6feedb649 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
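The Acquiring/acquired/released triplet on the "...-events" lock is the standard DEBUG trace emitted by oslo.concurrency's lock wrapper; the waited/held timings come for free. The same pattern, as a minimal sketch:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events')
    def _pop_event():
        # critical section: pop the pending event for this instance
        pass

    _pop_event()  # logs waited/held durations at DEBUG, as seen above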
Jan 26 10:07:53 compute-0 neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff[261743]: [NOTICE]   (261761) : haproxy version is 2.8.14-c23fe91
Jan 26 10:07:53 compute-0 neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff[261743]: [NOTICE]   (261761) : path to executable is /usr/sbin/haproxy
Jan 26 10:07:53 compute-0 neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff[261743]: [WARNING]  (261761) : Exiting Master process...
Jan 26 10:07:53 compute-0 neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff[261743]: [WARNING]  (261761) : Exiting Master process...
Jan 26 10:07:53 compute-0 neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff[261743]: [ALERT]    (261761) : Current worker (261763) exited with code 143 (Terminated)
Jan 26 10:07:53 compute-0 neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff[261743]: [WARNING]  (261761) : All workers exited. Exiting... (0)
Jan 26 10:07:53 compute-0 systemd[1]: libpod-942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060.scope: Deactivated successfully.
Jan 26 10:07:53 compute-0 podman[262793]: 2026-01-26 10:07:53.165163672 +0000 UTC m=+0.196480445 container died 942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 10:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc29809c242388021b7b334ce027d237638e57937fb07c975db2b13b8498d9b8-merged.mount: Deactivated successfully.
Jan 26 10:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060-userdata-shm.mount: Deactivated successfully.
Jan 26 10:07:53 compute-0 podman[262793]: 2026-01-26 10:07:53.400576562 +0000 UTC m=+0.431893335 container cleanup 942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 10:07:53 compute-0 systemd[1]: libpod-conmon-942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060.scope: Deactivated successfully.
Jan 26 10:07:53 compute-0 podman[262853]: 2026-01-26 10:07:53.524881093 +0000 UTC m=+0.099545370 container remove 942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 10:07:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v770: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 795 KiB/s wr, 48 op/s
Jan 26 10:07:53 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:53.531 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[26380bfa-78c2-4b66-82ba-b41c8855cd53]: (4, ('Mon Jan 26 10:07:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff (942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060)\n942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060\nMon Jan 26 10:07:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff (942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060)\n942495f7d3152da0e86864d4b36cd7fba4389966c4893b685ab9105191ee6060\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
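The haproxy NOTICE/WARNING/ALERT sequence and the libpod scope deactivation above are a clean shutdown of the per-network metadata proxy: podman delivers SIGTERM, the haproxy master drains its worker (exit code 143 = 128 + SIGTERM), and systemd reaps the container scope and overlay mounts. The privsep reply carries the stdout of neutron's kill/cleanup wrapper; what it boils down to, sketched with processutils (the exact wrapper script is not shown in this log):

    from oslo_concurrency import processutils

    name = 'neutron-haproxy-ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff'
    processutils.execute('podman', 'stop', name)  # SIGTERM; worker exits 143
    processutils.execute('podman', 'rm', name)    # the 'container remove' above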
Jan 26 10:07:53 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:53.532 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[721a8944-7936-4c2a-8dff-660c7fb1b323]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:53 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:53.533 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82a5dc98-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.571 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:53 compute-0 kernel: tap82a5dc98-30: left promiscuous mode
Jan 26 10:07:53 compute-0 nova_compute[254880]: 2026-01-26 10:07:53.586 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:53 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:53.589 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[87166639-d4c4-438b-a56a-e199e8b109f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:53 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:53.604 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[33fef7da-f947-4dc9-be7c-f497dc291e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:53 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:53.605 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[86ab801f-7c65-4996-b021-a3df2a218bdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:53 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:53.625 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[14f35d00-b21f-4202-850e-9fb259223235]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400316, 'reachable_time': 15835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262868, 'error': None, 'target': 'ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:53 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:53.635 167020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 10:07:53 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:53.636 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f58a5c-9165-4077-9943-3e439ce48f35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:07:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d82a5dc98\x2d3279\x2d47e7\x2db5f8\x2da111d4ea33ff.mount: Deactivated successfully.
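With the proxy gone, the agent deletes the per-network ovnmeta- namespace and systemd drops the bind mount under /run/netns. Neutron's privileged remove_netns is a thin wrapper over pyroute2, roughly:

    from pyroute2 import netns

    ns = 'ovnmeta-82a5dc98-3279-47e7-b5f8-a111d4ea33ff'
    if ns in netns.listnetns():
        netns.remove(ns)  # unlink /run/netns/<ns>; systemd reaps the mount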
Jan 26 10:07:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:53 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:53 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:53.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:54 compute-0 nova_compute[254880]: 2026-01-26 10:07:54.115 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:54 compute-0 nova_compute[254880]: 2026-01-26 10:07:54.122 254884 INFO nova.virt.libvirt.driver [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Deleting instance files /var/lib/nova/instances/5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_del
Jan 26 10:07:54 compute-0 nova_compute[254880]: 2026-01-26 10:07:54.123 254884 INFO nova.virt.libvirt.driver [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Deletion of /var/lib/nova/instances/5ac85101-7f84-4ad6-b66a-95cd2fdfcd14_del complete
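The _del suffix in those two lines is Nova's rename-then-delete pattern: the instance directory is renamed first so a crash mid-cleanup can never leave a bootable-looking path behind. The same idea in outline (paths from the log, error handling omitted):

    import os
    import shutil

    base = '/var/lib/nova/instances/5ac85101-7f84-4ad6-b66a-95cd2fdfcd14'
    os.rename(base, base + '_del')   # atomic on the same filesystem
    shutil.rmtree(base + '_del')     # 'Deletion of ..._del complete'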
Jan 26 10:07:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:54.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:54.692 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:54.692 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:54.693 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:54 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:07:54.904 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
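This DbSetCommand is the OVN metadata agent's liveness beacon: it writes the sequence number of the last southbound config it processed ('neutron:ovn-metadata-sb-cfg': '5') into its Chassis_Private row, which the Neutron server reads to judge the agent alive. Through ovsdbapp's generic API the write looks roughly like the sketch below; the southbound socket path is an assumption, and the agent additionally passes if_exists=True:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/ovn/ovnsb_db.sock',
                                          'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    sb.db_set(
        'Chassis_Private',
        'f90cdfa2-81a1-408b-861e-9121944637ea',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),
    ).execute(check_error=True)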
Jan 26 10:07:54 compute-0 ceph-mon[74456]: pgmap v770: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 795 KiB/s wr, 48 op/s
Jan 26 10:07:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v771: 353 pgs: 353 active+clean; 121 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 797 KiB/s wr, 76 op/s
Jan 26 10:07:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:55 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:55 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:55.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:56 compute-0 ceph-mon[74456]: pgmap v771: 353 pgs: 353 active+clean; 121 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 797 KiB/s wr, 76 op/s
Jan 26 10:07:56 compute-0 nova_compute[254880]: 2026-01-26 10:07:56.256 254884 DEBUG nova.compute.manager [req-46070a57-0b9e-454c-90d5-736dc6093270 req-6e063e77-d1c8-4304-bc1f-fc3fd883fc3c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Received event network-vif-plugged-9e43222f-ece8-42ba-968c-6ed6feedb649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:07:56 compute-0 nova_compute[254880]: 2026-01-26 10:07:56.257 254884 DEBUG oslo_concurrency.lockutils [req-46070a57-0b9e-454c-90d5-736dc6093270 req-6e063e77-d1c8-4304-bc1f-fc3fd883fc3c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:56 compute-0 nova_compute[254880]: 2026-01-26 10:07:56.257 254884 DEBUG oslo_concurrency.lockutils [req-46070a57-0b9e-454c-90d5-736dc6093270 req-6e063e77-d1c8-4304-bc1f-fc3fd883fc3c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:56 compute-0 nova_compute[254880]: 2026-01-26 10:07:56.257 254884 DEBUG oslo_concurrency.lockutils [req-46070a57-0b9e-454c-90d5-736dc6093270 req-6e063e77-d1c8-4304-bc1f-fc3fd883fc3c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:56 compute-0 nova_compute[254880]: 2026-01-26 10:07:56.257 254884 DEBUG nova.compute.manager [req-46070a57-0b9e-454c-90d5-736dc6093270 req-6e063e77-d1c8-4304-bc1f-fc3fd883fc3c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] No waiting events found dispatching network-vif-plugged-9e43222f-ece8-42ba-968c-6ed6feedb649 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:07:56 compute-0 nova_compute[254880]: 2026-01-26 10:07:56.258 254884 WARNING nova.compute.manager [req-46070a57-0b9e-454c-90d5-736dc6093270 req-6e063e77-d1c8-4304-bc1f-fc3fd883fc3c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Received unexpected event network-vif-plugged-9e43222f-ece8-42ba-968c-6ed6feedb649 for instance with vm_state active and task_state deleting.
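The WARNING is benign: Nova only blocks on external events that some operation registered a waiter for, and nothing waits for network-vif-plugged while the task_state is deleting, so the late event is logged and discarded. A minimal model of that waiter table (illustrative, not Nova's actual code):

    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(instance_uuid, event_name):
        ev = threading.Event()
        _waiters[(instance_uuid, event_name)] = ev
        return ev

    def pop_event(instance_uuid, event_name):
        ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            return None  # 'No waiting events found': unexpected-event path
        ev.set()
        return ev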
Jan 26 10:07:56 compute-0 nova_compute[254880]: 2026-01-26 10:07:56.263 254884 DEBUG nova.virt.libvirt.host [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 26 10:07:56 compute-0 nova_compute[254880]: 2026-01-26 10:07:56.264 254884 INFO nova.virt.libvirt.host [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] UEFI support detected
Jan 26 10:07:56 compute-0 nova_compute[254880]: 2026-01-26 10:07:56.266 254884 INFO nova.compute.manager [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Took 3.46 seconds to destroy the instance on the hypervisor.
Jan 26 10:07:56 compute-0 nova_compute[254880]: 2026-01-26 10:07:56.267 254884 DEBUG oslo.service.loopingcall [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 10:07:56 compute-0 nova_compute[254880]: 2026-01-26 10:07:56.267 254884 DEBUG nova.compute.manager [-] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 10:07:56 compute-0 nova_compute[254880]: 2026-01-26 10:07:56.267 254884 DEBUG nova.network.neutron [-] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 10:07:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:56.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:07:56] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
Jan 26 10:07:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:07:56] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
Jan 26 10:07:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:56 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:57 compute-0 nova_compute[254880]: 2026-01-26 10:07:57.069 254884 DEBUG nova.network.neutron [-] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:07:57 compute-0 nova_compute[254880]: 2026-01-26 10:07:57.086 254884 INFO nova.compute.manager [-] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Took 0.82 seconds to deallocate network for instance.
Jan 26 10:07:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:07:57.118Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:07:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:07:57.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
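Alertmanager on compute-0 is failing to deliver alerts to the Ceph dashboard receivers on compute-1 and compute-2 (port 8443): the first attempt dies on a dial timeout, the retries on a context deadline. The receiver is just a JSON webhook; a stand-in that shows the payload shape Alertmanager POSTs to /api/prometheus_receiver (purely illustrative, not the dashboard's implementation):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers['Content-Length']))
            alerts = json.loads(body).get('alerts', [])
            print('received %d alert(s)' % len(alerts))
            self.send_response(200)
            self.end_headers()

    HTTPServer(('', 8443), Receiver).serve_forever()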
Jan 26 10:07:57 compute-0 nova_compute[254880]: 2026-01-26 10:07:57.136 254884 DEBUG oslo_concurrency.lockutils [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:07:57 compute-0 nova_compute[254880]: 2026-01-26 10:07:57.136 254884 DEBUG oslo_concurrency.lockutils [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:07:57 compute-0 nova_compute[254880]: 2026-01-26 10:07:57.191 254884 DEBUG oslo_concurrency.processutils [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:07:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v772: 353 pgs: 353 active+clean; 121 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 17 KiB/s wr, 30 op/s
Jan 26 10:07:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:07:57 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4177848907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:57 compute-0 nova_compute[254880]: 2026-01-26 10:07:57.633 254884 DEBUG oslo_concurrency.processutils [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
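The ceph df round trip (dispatched to the mon above, answered in 0.442s) is how the resource tracker refreshes RBD pool capacity while holding the compute_resources lock. Reproducing the call exactly as logged, with the standard JSON keys of ceph df:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    totals = json.loads(out)['stats']            # cluster-wide byte counters
    avail_gb = totals['total_avail_bytes'] / 1024 ** 3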
Jan 26 10:07:57 compute-0 nova_compute[254880]: 2026-01-26 10:07:57.641 254884 DEBUG nova.compute.provider_tree [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:07:57 compute-0 nova_compute[254880]: 2026-01-26 10:07:57.668 254884 DEBUG nova.scheduler.client.report [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
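Worked through, that inventory is what gives this host its schedulable capacity: placement treats capacity as (total - reserved) * allocation_ratio per resource class.

    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2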
Jan 26 10:07:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4177848907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:07:57 compute-0 nova_compute[254880]: 2026-01-26 10:07:57.696 254884 DEBUG oslo_concurrency.lockutils [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:57 compute-0 nova_compute[254880]: 2026-01-26 10:07:57.718 254884 INFO nova.scheduler.client.report [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Deleted allocations for instance 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14
Jan 26 10:07:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:57 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:57 compute-0 nova_compute[254880]: 2026-01-26 10:07:57.800 254884 DEBUG oslo_concurrency.lockutils [None req-d5d7b2e2-1534-4715-b8fc-5768c0d5f7d1 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "5ac85101-7f84-4ad6-b66a-95cd2fdfcd14" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:07:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:57 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:57.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:58 compute-0 nova_compute[254880]: 2026-01-26 10:07:58.072 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:07:58 compute-0 nova_compute[254880]: 2026-01-26 10:07:58.380 254884 DEBUG nova.compute.manager [req-b39e8d1d-1424-4db6-9636-36056c8d2292 req-0e91470f-f5bf-43cd-a4b6-d470b1319fed b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Received event network-vif-deleted-9e43222f-ece8-42ba-968c-6ed6feedb649 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:07:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:07:58.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:58 compute-0 ceph-mon[74456]: pgmap v772: 353 pgs: 353 active+clean; 121 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 17 KiB/s wr, 30 op/s
Jan 26 10:07:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/834116444' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:07:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/834116444' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:07:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/100758 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:07:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:58 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:59 compute-0 nova_compute[254880]: 2026-01-26 10:07:59.116 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:07:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v773: 353 pgs: 353 active+clean; 121 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 17 KiB/s wr, 30 op/s
Jan 26 10:07:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:59 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:07:59 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:07:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:07:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:07:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:07:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:07:59 compute-0 nova_compute[254880]: 2026-01-26 10:07:59.991 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:00 compute-0 nova_compute[254880]: 2026-01-26 10:08:00.094 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:00.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:00 compute-0 ceph-mon[74456]: pgmap v773: 353 pgs: 353 active+clean; 121 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 17 KiB/s wr, 30 op/s
Jan 26 10:08:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v774: 353 pgs: 353 active+clean; 121 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 26 KiB/s wr, 31 op/s
Jan 26 10:08:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:01 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:01 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:01.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:02.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:02 compute-0 sudo[262904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:08:02 compute-0 sudo[262904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:08:02 compute-0 sudo[262904]: pam_unix(sudo:session): session closed for user root
Jan 26 10:08:02 compute-0 ceph-mon[74456]: pgmap v774: 353 pgs: 353 active+clean; 121 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 26 KiB/s wr, 31 op/s
Jan 26 10:08:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:02 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:03 compute-0 nova_compute[254880]: 2026-01-26 10:08:03.075 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:08:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v775: 353 pgs: 353 active+clean; 121 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 29 op/s
Jan 26 10:08:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:08:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:08:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:03 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:03 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:08:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:03.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:04 compute-0 nova_compute[254880]: 2026-01-26 10:08:04.119 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:04.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:04 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:04 compute-0 ceph-mon[74456]: pgmap v775: 353 pgs: 353 active+clean; 121 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 29 op/s
Jan 26 10:08:04 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3400737954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:08:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v776: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 57 op/s
Jan 26 10:08:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:05 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:05 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:05.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:06 compute-0 ceph-mon[74456]: pgmap v776: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 57 op/s
Jan 26 10:08:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:06.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:08:06] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Jan 26 10:08:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:08:06] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Jan 26 10:08:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:06 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:08:07.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:08:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:07 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:08:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v777: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 29 op/s
Jan 26 10:08:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:07 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:07 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:07.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:08 compute-0 nova_compute[254880]: 2026-01-26 10:08:08.049 254884 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769422073.0469694, 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:08:08 compute-0 nova_compute[254880]: 2026-01-26 10:08:08.049 254884 INFO nova.compute.manager [-] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] VM Stopped (Lifecycle Event)
Jan 26 10:08:08 compute-0 nova_compute[254880]: 2026-01-26 10:08:08.077 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:08:08 compute-0 nova_compute[254880]: 2026-01-26 10:08:08.159 254884 DEBUG nova.compute.manager [None req-fa4f85c1-700a-4512-b68a-27c5e74593bb - - - - - -] [instance: 5ac85101-7f84-4ad6-b66a-95cd2fdfcd14] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:08:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:08.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:08 compute-0 ceph-mon[74456]: pgmap v777: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 29 op/s
Jan 26 10:08:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:08 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:09 compute-0 nova_compute[254880]: 2026-01-26 10:08:09.121 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:09 compute-0 podman[262938]: 2026-01-26 10:08:09.124950248 +0000 UTC m=+0.055701733 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
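The health_status=healthy record is podman's periodic healthcheck for ovn_metadata_agent; per the config dump above, the probe is simply /openstack/healthcheck run inside the container. The same check can be fired by hand (sketch; podman's convention is exit 0 healthy, 1 unhealthy):

    from oslo_concurrency import processutils

    out, _ = processutils.execute('podman', 'healthcheck', 'run',
                                  'ovn_metadata_agent',
                                  check_exit_code=[0, 1])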
Jan 26 10:08:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v778: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 29 op/s
Jan 26 10:08:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:09 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:09 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:09.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:10.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:08:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:08:10 compute-0 sshd-session[262957]: Invalid user postgres from 157.245.76.178 port 55650
Jan 26 10:08:10 compute-0 sshd-session[262957]: Connection closed by invalid user postgres 157.245.76.178 port 55650 [preauth]
Jan 26 10:08:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:10 compute-0 ceph-mon[74456]: pgmap v778: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 29 op/s
Jan 26 10:08:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v779: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 11 KiB/s wr, 30 op/s
Jan 26 10:08:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:11 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:11 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:11.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:08:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:12.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:08:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:12 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:12 compute-0 ceph-mon[74456]: pgmap v779: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 11 KiB/s wr, 30 op/s
Jan 26 10:08:13 compute-0 nova_compute[254880]: 2026-01-26 10:08:13.079 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:08:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v780: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Jan 26 10:08:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:13 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:13 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:13.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:13 compute-0 ceph-mon[74456]: pgmap v780: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Jan 26 10:08:14 compute-0 nova_compute[254880]: 2026-01-26 10:08:14.152 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:14.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:14 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v781: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Jan 26 10:08:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:15 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:15 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:15.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:16.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:16 compute-0 ceph-mon[74456]: pgmap v781: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Jan 26 10:08:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:08:16] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Jan 26 10:08:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:08:16] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Jan 26 10:08:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:16 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:08:17.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:08:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:08:17.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:08:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v782: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Jan 26 10:08:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:17 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:17 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:08:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:17.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:08:18 compute-0 nova_compute[254880]: 2026-01-26 10:08:18.081 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:08:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:18.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:08:18
Jan 26 10:08:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:08:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:08:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.nfs', 'default.rgw.log', '.mgr', 'images', 'volumes', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data']
Jan 26 10:08:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:08:18 compute-0 ceph-mon[74456]: pgmap v782: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Jan 26 10:08:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:08:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:08:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:08:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:08:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:08:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:08:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:08:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:08:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:18 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb140014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:08:19 compute-0 nova_compute[254880]: 2026-01-26 10:08:19.154 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Jan 26 10:08:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:19 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:19 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:08:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:19.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:08:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:20.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:08:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:08:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:20 compute-0 ceph-mon[74456]: pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Jan 26 10:08:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 26 10:08:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:21 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:21 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:08:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:21.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:08:21 compute-0 ceph-mon[74456]: pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 26 10:08:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:22.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:22 compute-0 sudo[262973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:08:22 compute-0 sudo[262973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:08:22 compute-0 sudo[262973]: pam_unix(sudo:session): session closed for user root
Jan 26 10:08:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:22 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:23 compute-0 nova_compute[254880]: 2026-01-26 10:08:23.083 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:08:23 compute-0 podman[262998]: 2026-01-26 10:08:23.14383403 +0000 UTC m=+0.078301124 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 10:08:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v785: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 26 10:08:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:23 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:23 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:23.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:24 compute-0 nova_compute[254880]: 2026-01-26 10:08:24.185 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:24.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:24 compute-0 ceph-mon[74456]: pgmap v785: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Jan 26 10:08:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:24 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v786: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:08:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:25 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:25 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:25.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:26.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:08:26] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Jan 26 10:08:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:08:26] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Jan 26 10:08:26 compute-0 ceph-mon[74456]: pgmap v786: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:08:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:26 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:08:27.121Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:08:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 10:08:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 5660 writes, 25K keys, 5659 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
                                           Cumulative WAL: 5660 writes, 5659 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1530 writes, 6764 keys, 1529 commit groups, 1.0 writes per commit group, ingest: 11.20 MB, 0.02 MB/s
                                           Interval WAL: 1530 writes, 1529 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     72.6      0.55              0.11        14    0.039       0      0       0.0       0.0
                                             L6      1/0   12.05 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.1    155.9    133.3      1.24              0.39        13    0.095     67K   6938       0.0       0.0
                                            Sum      1/0   12.05 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   5.1    107.9    114.6      1.79              0.50        27    0.066     67K   6938       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7    109.2    110.4      0.79              0.24        12    0.066     34K   3101       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    155.9    133.3      1.24              0.39        13    0.095     67K   6938       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    136.8      0.29              0.11        13    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.26              0.00         1    0.259       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.039, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.20 GB write, 0.11 MB/s write, 0.19 GB read, 0.11 MB/s read, 1.8 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a9cd69b350#2 capacity: 304.00 MB usage: 14.55 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000155 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(799,14.00 MB,4.60593%) FilterBlock(28,198.92 KB,0.0639012%) IndexBlock(28,357.19 KB,0.114742%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 10:08:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v787: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:08:27 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1542247195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:08:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:27 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:27 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:08:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:27.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:08:28 compute-0 nova_compute[254880]: 2026-01-26 10:08:28.085 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:08:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:28.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:28 compute-0 ceph-mon[74456]: pgmap v787: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:08:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:28 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:29 compute-0 nova_compute[254880]: 2026-01-26 10:08:29.187 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v788: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:08:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:29 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:29 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180041d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:29 compute-0 nova_compute[254880]: 2026-01-26 10:08:29.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:08:29 compute-0 nova_compute[254880]: 2026-01-26 10:08:29.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:08:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:29.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:30.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:08:30 compute-0 ceph-mon[74456]: pgmap v788: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:08:30 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2323226978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:08:30 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3734299171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:08:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:30 compute-0 nova_compute[254880]: 2026-01-26 10:08:30.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:08:30 compute-0 nova_compute[254880]: 2026-01-26 10:08:30.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:08:30 compute-0 nova_compute[254880]: 2026-01-26 10:08:30.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:08:30 compute-0 nova_compute[254880]: 2026-01-26 10:08:30.987 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:08:30 compute-0 nova_compute[254880]: 2026-01-26 10:08:30.987 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:08:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v789: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:08:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:31 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:31 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:31 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2213818827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:08:31 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1950693756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:08:31 compute-0 nova_compute[254880]: 2026-01-26 10:08:31.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:08:31 compute-0 nova_compute[254880]: 2026-01-26 10:08:31.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:08:31 compute-0 nova_compute[254880]: 2026-01-26 10:08:31.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:08:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:31.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:31 compute-0 nova_compute[254880]: 2026-01-26 10:08:31.991 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:08:31 compute-0 nova_compute[254880]: 2026-01-26 10:08:31.992 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:08:31 compute-0 nova_compute[254880]: 2026-01-26 10:08:31.992 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:08:31 compute-0 nova_compute[254880]: 2026-01-26 10:08:31.992 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:08:31 compute-0 nova_compute[254880]: 2026-01-26 10:08:31.993 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:08:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:08:32 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2279715829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:08:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:32.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:32 compute-0 nova_compute[254880]: 2026-01-26 10:08:32.447 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:08:32 compute-0 nova_compute[254880]: 2026-01-26 10:08:32.605 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:08:32 compute-0 nova_compute[254880]: 2026-01-26 10:08:32.607 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4617MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:08:32 compute-0 nova_compute[254880]: 2026-01-26 10:08:32.607 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:08:32 compute-0 nova_compute[254880]: 2026-01-26 10:08:32.607 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:08:32 compute-0 nova_compute[254880]: 2026-01-26 10:08:32.702 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:08:32 compute-0 nova_compute[254880]: 2026-01-26 10:08:32.703 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:08:32 compute-0 nova_compute[254880]: 2026-01-26 10:08:32.720 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:08:32 compute-0 ceph-mon[74456]: pgmap v789: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:08:32 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2279715829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:08:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:32 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:33 compute-0 nova_compute[254880]: 2026-01-26 10:08:33.088 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:08:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:08:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2515291421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:08:33 compute-0 nova_compute[254880]: 2026-01-26 10:08:33.225 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:08:33 compute-0 nova_compute[254880]: 2026-01-26 10:08:33.231 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:08:33 compute-0 nova_compute[254880]: 2026-01-26 10:08:33.246 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:08:33 compute-0 nova_compute[254880]: 2026-01-26 10:08:33.265 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:08:33 compute-0 nova_compute[254880]: 2026-01-26 10:08:33.265 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:08:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v790: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Jan 26 10:08:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:08:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:08:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:33 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb3800a0c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:33 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:33 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2515291421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:08:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:08:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:33.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:34 compute-0 nova_compute[254880]: 2026-01-26 10:08:34.224 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:34.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:34 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb0c003150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:34 compute-0 ceph-mon[74456]: pgmap v790: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Jan 26 10:08:35 compute-0 nova_compute[254880]: 2026-01-26 10:08:35.265 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:08:35 compute-0 nova_compute[254880]: 2026-01-26 10:08:35.265 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:08:35 compute-0 nova_compute[254880]: 2026-01-26 10:08:35.266 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:08:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v791: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 26 10:08:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:35 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004210 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:35 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004210 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:08:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:35.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:08:35 compute-0 ceph-mon[74456]: pgmap v791: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 26 10:08:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:08:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:36.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:08:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:08:36] "GET /metrics HTTP/1.1" 200 48441 "" "Prometheus/2.51.0"
Jan 26 10:08:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:08:36] "GET /metrics HTTP/1.1" 200 48441 "" "Prometheus/2.51.0"
Jan 26 10:08:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:36 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1755361075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:08:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2214757130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:08:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:08:37.122Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:08:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:08:37.123Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:08:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v792: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 26 10:08:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34002ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:37.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:38 compute-0 ceph-mon[74456]: pgmap v792: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 26 10:08:38 compute-0 nova_compute[254880]: 2026-01-26 10:08:38.089 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:08:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:38.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:38 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:39 compute-0 nova_compute[254880]: 2026-01-26 10:08:39.227 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v793: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 26 10:08:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:39 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:39 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34002ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:39.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:40 compute-0 podman[263088]: 2026-01-26 10:08:40.117573604 +0000 UTC m=+0.053099741 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 10:08:40 compute-0 sudo[263108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:08:40 compute-0 sudo[263108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:08:40 compute-0 sudo[263108]: pam_unix(sudo:session): session closed for user root
Jan 26 10:08:40 compute-0 sudo[263133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:08:40 compute-0 sudo[263133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:08:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:40.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:08:40 compute-0 ceph-mon[74456]: pgmap v793: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 26 10:08:40 compute-0 sudo[263133]: pam_unix(sudo:session): session closed for user root
Jan 26 10:08:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:08:40 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:08:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:08:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:08:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:08:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:08:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:08:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:08:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:08:40 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:08:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:08:40 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:08:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:08:40 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:08:40 compute-0 sudo[263190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:08:40 compute-0 sudo[263190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:08:40 compute-0 sudo[263190]: pam_unix(sudo:session): session closed for user root
Jan 26 10:08:41 compute-0 sudo[263215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:08:41 compute-0 sudo[263215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:08:41 compute-0 podman[263281]: 2026-01-26 10:08:41.395535765 +0000 UTC m=+0.039381699 container create 9b50a8dac133206983cacd1fc277ac90ed8442154c9faaaa3a19f62a0a7af94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:08:41 compute-0 systemd[1]: Started libpod-conmon-9b50a8dac133206983cacd1fc277ac90ed8442154c9faaaa3a19f62a0a7af94c.scope.
Jan 26 10:08:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:08:41 compute-0 podman[263281]: 2026-01-26 10:08:41.377313043 +0000 UTC m=+0.021159007 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:08:41 compute-0 podman[263281]: 2026-01-26 10:08:41.477523217 +0000 UTC m=+0.121369181 container init 9b50a8dac133206983cacd1fc277ac90ed8442154c9faaaa3a19f62a0a7af94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 26 10:08:41 compute-0 podman[263281]: 2026-01-26 10:08:41.486973917 +0000 UTC m=+0.130819851 container start 9b50a8dac133206983cacd1fc277ac90ed8442154c9faaaa3a19f62a0a7af94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:08:41 compute-0 podman[263281]: 2026-01-26 10:08:41.490325288 +0000 UTC m=+0.134171232 container attach 9b50a8dac133206983cacd1fc277ac90ed8442154c9faaaa3a19f62a0a7af94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 26 10:08:41 compute-0 laughing_volhard[263297]: 167 167
Jan 26 10:08:41 compute-0 systemd[1]: libpod-9b50a8dac133206983cacd1fc277ac90ed8442154c9faaaa3a19f62a0a7af94c.scope: Deactivated successfully.
Jan 26 10:08:41 compute-0 podman[263281]: 2026-01-26 10:08:41.493382322 +0000 UTC m=+0.137228256 container died 9b50a8dac133206983cacd1fc277ac90ed8442154c9faaaa3a19f62a0a7af94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:08:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-07f95b7b5f2ec1e60348329c227e53cca96b94594a8544ebf96d753dbc8a9f54-merged.mount: Deactivated successfully.
Jan 26 10:08:41 compute-0 podman[263281]: 2026-01-26 10:08:41.533768183 +0000 UTC m=+0.177614117 container remove 9b50a8dac133206983cacd1fc277ac90ed8442154c9faaaa3a19f62a0a7af94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 26 10:08:41 compute-0 systemd[1]: libpod-conmon-9b50a8dac133206983cacd1fc277ac90ed8442154c9faaaa3a19f62a0a7af94c.scope: Deactivated successfully.
Jan 26 10:08:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v794: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 26 10:08:41 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:08:41 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:08:41 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:08:41 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:08:41 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:08:41 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:08:41 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:08:41 compute-0 podman[263321]: 2026-01-26 10:08:41.685553102 +0000 UTC m=+0.040746471 container create a4eb190b2a5e9da596f94baf652f57e705f6f8675cb99e209095cd3bc3bedf42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_euclid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 10:08:41 compute-0 systemd[1]: Started libpod-conmon-a4eb190b2a5e9da596f94baf652f57e705f6f8675cb99e209095cd3bc3bedf42.scope.
Jan 26 10:08:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5955f122e3c1f0db63e214e5fcb6a327329497b777fa0b57369f690b44146d6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5955f122e3c1f0db63e214e5fcb6a327329497b777fa0b57369f690b44146d6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5955f122e3c1f0db63e214e5fcb6a327329497b777fa0b57369f690b44146d6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5955f122e3c1f0db63e214e5fcb6a327329497b777fa0b57369f690b44146d6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5955f122e3c1f0db63e214e5fcb6a327329497b777fa0b57369f690b44146d6f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:08:41 compute-0 podman[263321]: 2026-01-26 10:08:41.666961769 +0000 UTC m=+0.022155158 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:08:41 compute-0 podman[263321]: 2026-01-26 10:08:41.778203003 +0000 UTC m=+0.133396392 container init a4eb190b2a5e9da596f94baf652f57e705f6f8675cb99e209095cd3bc3bedf42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_euclid, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 10:08:41 compute-0 podman[263321]: 2026-01-26 10:08:41.784816984 +0000 UTC m=+0.140010353 container start a4eb190b2a5e9da596f94baf652f57e705f6f8675cb99e209095cd3bc3bedf42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_euclid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:08:41 compute-0 podman[263321]: 2026-01-26 10:08:41.795746498 +0000 UTC m=+0.150939877 container attach a4eb190b2a5e9da596f94baf652f57e705f6f8675cb99e209095cd3bc3bedf42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 26 10:08:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:41 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:41 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:41.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:42 compute-0 unruffled_euclid[263337]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:08:42 compute-0 unruffled_euclid[263337]: --> All data devices are unavailable
Jan 26 10:08:42 compute-0 systemd[1]: libpod-a4eb190b2a5e9da596f94baf652f57e705f6f8675cb99e209095cd3bc3bedf42.scope: Deactivated successfully.
Jan 26 10:08:42 compute-0 podman[263321]: 2026-01-26 10:08:42.10489399 +0000 UTC m=+0.460087369 container died a4eb190b2a5e9da596f94baf652f57e705f6f8675cb99e209095cd3bc3bedf42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:08:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5955f122e3c1f0db63e214e5fcb6a327329497b777fa0b57369f690b44146d6f-merged.mount: Deactivated successfully.
Jan 26 10:08:42 compute-0 podman[263321]: 2026-01-26 10:08:42.152382814 +0000 UTC m=+0.507576183 container remove a4eb190b2a5e9da596f94baf652f57e705f6f8675cb99e209095cd3bc3bedf42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_euclid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Jan 26 10:08:42 compute-0 systemd[1]: libpod-conmon-a4eb190b2a5e9da596f94baf652f57e705f6f8675cb99e209095cd3bc3bedf42.scope: Deactivated successfully.
Jan 26 10:08:42 compute-0 sudo[263215]: pam_unix(sudo:session): session closed for user root
Jan 26 10:08:42 compute-0 sudo[263364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:08:42 compute-0 sudo[263364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:08:42 compute-0 sudo[263364]: pam_unix(sudo:session): session closed for user root
Jan 26 10:08:42 compute-0 sudo[263389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:08:42 compute-0 sudo[263389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:08:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:42.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:42 compute-0 podman[263457]: 2026-01-26 10:08:42.72511622 +0000 UTC m=+0.038007435 container create cbeda9847e3ddd66c9d11e939127d9fa8d7b6c7f7851aa6699d85bd5f7b420df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:08:42 compute-0 systemd[1]: Started libpod-conmon-cbeda9847e3ddd66c9d11e939127d9fa8d7b6c7f7851aa6699d85bd5f7b420df.scope.
Jan 26 10:08:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:08:42 compute-0 podman[263457]: 2026-01-26 10:08:42.793529862 +0000 UTC m=+0.106421117 container init cbeda9847e3ddd66c9d11e939127d9fa8d7b6c7f7851aa6699d85bd5f7b420df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Jan 26 10:08:42 compute-0 podman[263457]: 2026-01-26 10:08:42.799791344 +0000 UTC m=+0.112682559 container start cbeda9847e3ddd66c9d11e939127d9fa8d7b6c7f7851aa6699d85bd5f7b420df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_wescoff, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:08:42 compute-0 podman[263457]: 2026-01-26 10:08:42.803692899 +0000 UTC m=+0.116584134 container attach cbeda9847e3ddd66c9d11e939127d9fa8d7b6c7f7851aa6699d85bd5f7b420df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_wescoff, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:08:42 compute-0 laughing_wescoff[263474]: 167 167
Jan 26 10:08:42 compute-0 podman[263457]: 2026-01-26 10:08:42.710628038 +0000 UTC m=+0.023519273 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:08:42 compute-0 systemd[1]: libpod-cbeda9847e3ddd66c9d11e939127d9fa8d7b6c7f7851aa6699d85bd5f7b420df.scope: Deactivated successfully.
Jan 26 10:08:42 compute-0 podman[263457]: 2026-01-26 10:08:42.807510522 +0000 UTC m=+0.120401777 container died cbeda9847e3ddd66c9d11e939127d9fa8d7b6c7f7851aa6699d85bd5f7b420df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 26 10:08:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-be5af7008d3835d2c3b559c6be6efd4ceb5ab3d1a47138cdee371752284c30a7-merged.mount: Deactivated successfully.
Jan 26 10:08:42 compute-0 sudo[263477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:08:42 compute-0 sudo[263477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:08:42 compute-0 sudo[263477]: pam_unix(sudo:session): session closed for user root
Jan 26 10:08:42 compute-0 podman[263457]: 2026-01-26 10:08:42.847464803 +0000 UTC m=+0.160356028 container remove cbeda9847e3ddd66c9d11e939127d9fa8d7b6c7f7851aa6699d85bd5f7b420df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_wescoff, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:08:42 compute-0 systemd[1]: libpod-conmon-cbeda9847e3ddd66c9d11e939127d9fa8d7b6c7f7851aa6699d85bd5f7b420df.scope: Deactivated successfully.
Jan 26 10:08:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:42 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34002ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:42 compute-0 ceph-mon[74456]: pgmap v794: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 26 10:08:43 compute-0 podman[263521]: 2026-01-26 10:08:43.027477216 +0000 UTC m=+0.052608738 container create 35ad4b3e090f1b051385ca91fa1e6a5a409cb805847230c39b58a035213ba031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 10:08:43 compute-0 systemd[1]: Started libpod-conmon-35ad4b3e090f1b051385ca91fa1e6a5a409cb805847230c39b58a035213ba031.scope.
Jan 26 10:08:43 compute-0 nova_compute[254880]: 2026-01-26 10:08:43.090 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:08:43 compute-0 podman[263521]: 2026-01-26 10:08:43.004244001 +0000 UTC m=+0.029375553 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6d3210d5c8a1c83d00d8b5df1ae9ce43f4d4290e5935f63bd4300e8433fe9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6d3210d5c8a1c83d00d8b5df1ae9ce43f4d4290e5935f63bd4300e8433fe9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6d3210d5c8a1c83d00d8b5df1ae9ce43f4d4290e5935f63bd4300e8433fe9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e6d3210d5c8a1c83d00d8b5df1ae9ce43f4d4290e5935f63bd4300e8433fe9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:08:43 compute-0 podman[263521]: 2026-01-26 10:08:43.113911027 +0000 UTC m=+0.139042579 container init 35ad4b3e090f1b051385ca91fa1e6a5a409cb805847230c39b58a035213ba031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_leavitt, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Jan 26 10:08:43 compute-0 podman[263521]: 2026-01-26 10:08:43.120925327 +0000 UTC m=+0.146056849 container start 35ad4b3e090f1b051385ca91fa1e6a5a409cb805847230c39b58a035213ba031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 10:08:43 compute-0 podman[263521]: 2026-01-26 10:08:43.124650557 +0000 UTC m=+0.149782079 container attach 35ad4b3e090f1b051385ca91fa1e6a5a409cb805847230c39b58a035213ba031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_leavitt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:08:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]: {
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:     "0": [
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:         {
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:             "devices": [
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "/dev/loop3"
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:             ],
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:             "lv_name": "ceph_lv0",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:             "lv_size": "21470642176",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:             "name": "ceph_lv0",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:             "tags": {
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "ceph.cluster_name": "ceph",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "ceph.crush_device_class": "",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "ceph.encrypted": "0",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "ceph.osd_id": "0",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "ceph.type": "block",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "ceph.vdo": "0",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:                 "ceph.with_tpm": "0"
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:             },
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:             "type": "block",
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:             "vg_name": "ceph_vg0"
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:         }
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]:     ]
Jan 26 10:08:43 compute-0 sweet_leavitt[263537]: }
Jan 26 10:08:43 compute-0 systemd[1]: libpod-35ad4b3e090f1b051385ca91fa1e6a5a409cb805847230c39b58a035213ba031.scope: Deactivated successfully.
Jan 26 10:08:43 compute-0 podman[263521]: 2026-01-26 10:08:43.39888494 +0000 UTC m=+0.424016462 container died 35ad4b3e090f1b051385ca91fa1e6a5a409cb805847230c39b58a035213ba031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_leavitt, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 26 10:08:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e6d3210d5c8a1c83d00d8b5df1ae9ce43f4d4290e5935f63bd4300e8433fe9f-merged.mount: Deactivated successfully.
Jan 26 10:08:43 compute-0 podman[263521]: 2026-01-26 10:08:43.439830646 +0000 UTC m=+0.464962168 container remove 35ad4b3e090f1b051385ca91fa1e6a5a409cb805847230c39b58a035213ba031 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_leavitt, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 10:08:43 compute-0 systemd[1]: libpod-conmon-35ad4b3e090f1b051385ca91fa1e6a5a409cb805847230c39b58a035213ba031.scope: Deactivated successfully.
Jan 26 10:08:43 compute-0 sudo[263389]: pam_unix(sudo:session): session closed for user root
Jan 26 10:08:43 compute-0 sudo[263559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:08:43 compute-0 sudo[263559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:08:43 compute-0 sudo[263559]: pam_unix(sudo:session): session closed for user root
Jan 26 10:08:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v795: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 26 10:08:43 compute-0 sudo[263584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:08:43 compute-0 sudo[263584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:08:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:43 compute-0 ceph-mon[74456]: pgmap v795: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 26 10:08:43 compute-0 podman[263649]: 2026-01-26 10:08:43.981096096 +0000 UTC m=+0.037851490 container create 94159dd381394b167917a01eefca161a1a163d90d649da7c4fd6127673c4ced4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 26 10:08:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 10:08:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:43.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 10:08:44 compute-0 systemd[1]: Started libpod-conmon-94159dd381394b167917a01eefca161a1a163d90d649da7c4fd6127673c4ced4.scope.
Jan 26 10:08:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:08:44 compute-0 podman[263649]: 2026-01-26 10:08:43.965843306 +0000 UTC m=+0.022598720 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:08:44 compute-0 podman[263649]: 2026-01-26 10:08:44.073911921 +0000 UTC m=+0.130667345 container init 94159dd381394b167917a01eefca161a1a163d90d649da7c4fd6127673c4ced4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_swirles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 10:08:44 compute-0 podman[263649]: 2026-01-26 10:08:44.081853435 +0000 UTC m=+0.138608829 container start 94159dd381394b167917a01eefca161a1a163d90d649da7c4fd6127673c4ced4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:08:44 compute-0 podman[263649]: 2026-01-26 10:08:44.085215487 +0000 UTC m=+0.141970911 container attach 94159dd381394b167917a01eefca161a1a163d90d649da7c4fd6127673c4ced4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:08:44 compute-0 exciting_swirles[263666]: 167 167
Jan 26 10:08:44 compute-0 systemd[1]: libpod-94159dd381394b167917a01eefca161a1a163d90d649da7c4fd6127673c4ced4.scope: Deactivated successfully.
Jan 26 10:08:44 compute-0 podman[263649]: 2026-01-26 10:08:44.089028039 +0000 UTC m=+0.145783433 container died 94159dd381394b167917a01eefca161a1a163d90d649da7c4fd6127673c4ced4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_swirles, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:08:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-377de7f08af39cba7b570d8bd82a59ef3b2ca4f8aafad5e6ff853ad6cdd4697f-merged.mount: Deactivated successfully.
Jan 26 10:08:44 compute-0 podman[263649]: 2026-01-26 10:08:44.13761767 +0000 UTC m=+0.194373074 container remove 94159dd381394b167917a01eefca161a1a163d90d649da7c4fd6127673c4ced4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 10:08:44 compute-0 systemd[1]: libpod-conmon-94159dd381394b167917a01eefca161a1a163d90d649da7c4fd6127673c4ced4.scope: Deactivated successfully.
Jan 26 10:08:44 compute-0 nova_compute[254880]: 2026-01-26 10:08:44.230 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:44 compute-0 podman[263692]: 2026-01-26 10:08:44.320975185 +0000 UTC m=+0.048156691 container create f35afef19ff98c8601caad49f3b6deda0d56bc183a232b016f317af9eab5b447 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 10:08:44 compute-0 ovn_controller[155832]: 2026-01-26T10:08:44Z|00036|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 26 10:08:44 compute-0 systemd[1]: Started libpod-conmon-f35afef19ff98c8601caad49f3b6deda0d56bc183a232b016f317af9eab5b447.scope.
Jan 26 10:08:44 compute-0 podman[263692]: 2026-01-26 10:08:44.298224992 +0000 UTC m=+0.025406498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:08:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6398041bec13d0358f948041002e7b632a03d02cdaa9d6cdb59af50f26fce5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6398041bec13d0358f948041002e7b632a03d02cdaa9d6cdb59af50f26fce5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6398041bec13d0358f948041002e7b632a03d02cdaa9d6cdb59af50f26fce5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6398041bec13d0358f948041002e7b632a03d02cdaa9d6cdb59af50f26fce5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:08:44 compute-0 podman[263692]: 2026-01-26 10:08:44.416902825 +0000 UTC m=+0.144084341 container init f35afef19ff98c8601caad49f3b6deda0d56bc183a232b016f317af9eab5b447 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 26 10:08:44 compute-0 podman[263692]: 2026-01-26 10:08:44.423451834 +0000 UTC m=+0.150633320 container start f35afef19ff98c8601caad49f3b6deda0d56bc183a232b016f317af9eab5b447 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mccarthy, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 10:08:44 compute-0 podman[263692]: 2026-01-26 10:08:44.427259167 +0000 UTC m=+0.154440673 container attach f35afef19ff98c8601caad49f3b6deda0d56bc183a232b016f317af9eab5b447 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:08:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:44.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:44 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:45 compute-0 lvm[263784]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:08:45 compute-0 lvm[263784]: VG ceph_vg0 finished
Jan 26 10:08:45 compute-0 musing_mccarthy[263708]: {}
Jan 26 10:08:45 compute-0 systemd[1]: libpod-f35afef19ff98c8601caad49f3b6deda0d56bc183a232b016f317af9eab5b447.scope: Deactivated successfully.
Jan 26 10:08:45 compute-0 systemd[1]: libpod-f35afef19ff98c8601caad49f3b6deda0d56bc183a232b016f317af9eab5b447.scope: Consumed 1.172s CPU time.
Jan 26 10:08:45 compute-0 podman[263692]: 2026-01-26 10:08:45.171046869 +0000 UTC m=+0.898228365 container died f35afef19ff98c8601caad49f3b6deda0d56bc183a232b016f317af9eab5b447 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:08:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b6398041bec13d0358f948041002e7b632a03d02cdaa9d6cdb59af50f26fce5-merged.mount: Deactivated successfully.
Jan 26 10:08:45 compute-0 podman[263692]: 2026-01-26 10:08:45.221956956 +0000 UTC m=+0.949138442 container remove f35afef19ff98c8601caad49f3b6deda0d56bc183a232b016f317af9eab5b447 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_mccarthy, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Jan 26 10:08:45 compute-0 systemd[1]: libpod-conmon-f35afef19ff98c8601caad49f3b6deda0d56bc183a232b016f317af9eab5b447.scope: Deactivated successfully.
Jan 26 10:08:45 compute-0 sudo[263584]: pam_unix(sudo:session): session closed for user root
Jan 26 10:08:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:08:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:08:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:08:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:08:45 compute-0 sudo[263802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:08:45 compute-0 sudo[263802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:08:45 compute-0 sudo[263802]: pam_unix(sudo:session): session closed for user root
Jan 26 10:08:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v796: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 26 10:08:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:45 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34001ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:45 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:45.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:08:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:08:46 compute-0 ceph-mon[74456]: pgmap v796: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 26 10:08:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:46.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:08:46] "GET /metrics HTTP/1.1" 200 48441 "" "Prometheus/2.51.0"
Jan 26 10:08:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:08:46] "GET /metrics HTTP/1.1" 200 48441 "" "Prometheus/2.51.0"
Jan 26 10:08:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:46 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:08:47.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:08:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v797: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:08:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:47 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:47 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34001ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:47.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:48 compute-0 nova_compute[254880]: 2026-01-26 10:08:48.094 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:08:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:48.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:48 compute-0 ceph-mon[74456]: pgmap v797: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:08:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:08:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:08:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:08:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:08:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:08:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:08:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:08:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:08:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:48 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34001ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:49 compute-0 nova_compute[254880]: 2026-01-26 10:08:49.232 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v798: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:08:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:08:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:49 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:49 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:50.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:08:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:50.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:50 compute-0 ceph-mon[74456]: pgmap v798: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:08:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34001ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v799: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:08:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:51 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:51 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:52.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:52.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:52 compute-0 ceph-mon[74456]: pgmap v799: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:08:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:52 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:53 compute-0 nova_compute[254880]: 2026-01-26 10:08:53.095 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:08:53 compute-0 sshd-session[263835]: Invalid user postgres from 157.245.76.178 port 34390
Jan 26 10:08:53 compute-0 podman[263837]: 2026-01-26 10:08:53.535095872 +0000 UTC m=+0.084473884 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 10:08:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v800: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Jan 26 10:08:53 compute-0 sshd-session[263835]: Connection closed by invalid user postgres 157.245.76.178 port 34390 [preauth]
Jan 26 10:08:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:53 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34001ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:53 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:08:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:54.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:08:54 compute-0 ceph-mon[74456]: pgmap v800: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Jan 26 10:08:54 compute-0 nova_compute[254880]: 2026-01-26 10:08:54.234 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:08:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:54.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:08:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:08:54.692 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:08:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:08:54.693 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:08:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:08:54.693 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:08:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:54 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v801: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Jan 26 10:08:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:55 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004570 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:55 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:08:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:56.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:08:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:56.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:08:56] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
Jan 26 10:08:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:08:56] "GET /metrics HTTP/1.1" 200 48468 "" "Prometheus/2.51.0"
Jan 26 10:08:56 compute-0 ceph-mon[74456]: pgmap v801: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Jan 26 10:08:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:56 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:08:57.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:08:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:08:57.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:08:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v802: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:08:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:57 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:57 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:08:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:08:58.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:08:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:08:58 compute-0 nova_compute[254880]: 2026-01-26 10:08:58.154 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:08:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:08:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:08:58.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:08:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:58 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:59 compute-0 ceph-mon[74456]: pgmap v802: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:08:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1772955815' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:08:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1772955815' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:08:59 compute-0 nova_compute[254880]: 2026-01-26 10:08:59.265 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:08:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v803: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:08:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:59 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:08:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:08:59 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb140020b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:09:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:00.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:09:00 compute-0 ceph-mon[74456]: pgmap v803: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:09:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:09:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:09:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:00.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:09:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v804: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:09:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:01 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:01 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb100046b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:09:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:02.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:09:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:02.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:02 compute-0 ceph-mon[74456]: pgmap v804: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:09:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:02 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb140020d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:02 compute-0 sudo[263873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:09:02 compute-0 sudo[263873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:02 compute-0 sudo[263873]: pam_unix(sudo:session): session closed for user root
Jan 26 10:09:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:09:03 compute-0 nova_compute[254880]: 2026-01-26 10:09:03.155 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v805: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:09:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:09:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:09:03 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:09:03.822 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:09:03 compute-0 nova_compute[254880]: 2026-01-26 10:09:03.822 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:03 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:09:03.823 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 10:09:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:03 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:03 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:04.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:04 compute-0 nova_compute[254880]: 2026-01-26 10:09:04.266 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:04.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:04 compute-0 ceph-mon[74456]: pgmap v805: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:09:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:09:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:04 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v806: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:09:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:05 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb140020f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:05 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:06.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 26 10:09:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:06.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:09:06] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:09:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:09:06] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:09:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:06 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:06 compute-0 ceph-mon[74456]: pgmap v806: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:09:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:09:07.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:09:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:09:07.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:09:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v807: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 1 op/s
Jan 26 10:09:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:07 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:07 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:08.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:09:08 compute-0 nova_compute[254880]: 2026-01-26 10:09:08.157 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:09:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:08.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:09:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:08 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:09 compute-0 ceph-mon[74456]: pgmap v807: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 1 op/s
Jan 26 10:09:09 compute-0 nova_compute[254880]: 2026-01-26 10:09:09.304 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v808: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 1 op/s
Jan 26 10:09:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:09 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:09 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:10.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:10 compute-0 ceph-mon[74456]: pgmap v808: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 1 op/s
Jan 26 10:09:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:09:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:09:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:10.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:09:10 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:09:10.825 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:09:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38001d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:11 compute-0 podman[263907]: 2026-01-26 10:09:11.146262545 +0000 UTC m=+0.075272380 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 26 10:09:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v809: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 1 op/s
Jan 26 10:09:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:11 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:11 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:12.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:12.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:12 compute-0 ceph-mon[74456]: pgmap v809: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 1 op/s
Jan 26 10:09:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:12 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:09:13 compute-0 nova_compute[254880]: 2026-01-26 10:09:13.158 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v810: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Jan 26 10:09:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:13 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38001d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:13 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:09:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:14.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:09:14 compute-0 nova_compute[254880]: 2026-01-26 10:09:14.334 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:09:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:14.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:09:14 compute-0 ceph-mon[74456]: pgmap v810: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Jan 26 10:09:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:14 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v811: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Jan 26 10:09:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:15 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:15 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38001f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:09:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:16.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:09:16 compute-0 ceph-mon[74456]: pgmap v811: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Jan 26 10:09:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:16.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:09:16] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:09:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:09:16] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:09:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:16 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:09:17.126Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:09:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v812: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 26 10:09:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:17 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:17 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:09:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:18.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:09:18 compute-0 nova_compute[254880]: 2026-01-26 10:09:18.160 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:09:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:18.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:09:18
Jan 26 10:09:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:09:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:09:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs', 'backups', 'vms', 'default.rgw.log', 'volumes', '.mgr', '.rgw.root', 'cephfs.cephfs.meta']
Jan 26 10:09:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:09:18 compute-0 ceph-mon[74456]: pgmap v812: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 26 10:09:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:09:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:09:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:09:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:09:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:09:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:09:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:09:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:09:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:18 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38001f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007595274142085043 of space, bias 1.0, pg target 0.22785822426255128 quantized to 32 (current 32)
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:09:19 compute-0 nova_compute[254880]: 2026-01-26 10:09:19.336 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v813: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 26 10:09:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:19 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:09:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:19 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:09:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:20.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:09:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:09:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:20.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:21 compute-0 ceph-mon[74456]: pgmap v813: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 26 10:09:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 7.0 KiB/s wr, 2 op/s
Jan 26 10:09:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:21 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38001f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:21 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:22.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:22 compute-0 ceph-mon[74456]: pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 7.0 KiB/s wr, 2 op/s
Jan 26 10:09:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:22.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:22 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:23 compute-0 sudo[263936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:09:23 compute-0 sudo[263936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:23 compute-0 sudo[263936]: pam_unix(sudo:session): session closed for user root
Jan 26 10:09:23 compute-0 nova_compute[254880]: 2026-01-26 10:09:23.162 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:09:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 5.0 KiB/s wr, 1 op/s
Jan 26 10:09:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:23 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:23 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38001f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:24.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:24 compute-0 podman[263961]: 2026-01-26 10:09:24.212169652 +0000 UTC m=+0.145586978 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 10:09:24 compute-0 nova_compute[254880]: 2026-01-26 10:09:24.338 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:24.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:24 compute-0 ceph-mon[74456]: pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 5.0 KiB/s wr, 1 op/s
Jan 26 10:09:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:24 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38001f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:25 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 26 10:09:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v816: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 6.2 KiB/s wr, 29 op/s
Jan 26 10:09:25 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1303242856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:09:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:25 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:25 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:26.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:26.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:09:26] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 26 10:09:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:09:26] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 26 10:09:26 compute-0 ceph-mon[74456]: pgmap v816: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 6.2 KiB/s wr, 29 op/s
Jan 26 10:09:26 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1429236293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:09:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:26 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38001f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:09:27.127Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:09:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v817: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 26 10:09:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:27 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:27 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:28.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:28 compute-0 nova_compute[254880]: 2026-01-26 10:09:28.164 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:09:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:28.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:28 compute-0 ceph-mon[74456]: pgmap v817: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 26 10:09:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:28 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:29 compute-0 nova_compute[254880]: 2026-01-26 10:09:29.340 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v818: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 26 10:09:29 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2035187687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:09:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:29 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380020d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:29 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:09:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:30.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:09:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:09:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:30.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:30 compute-0 ceph-mon[74456]: pgmap v818: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 26 10:09:30 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2407901453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:09:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:30 compute-0 nova_compute[254880]: 2026-01-26 10:09:30.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:09:30 compute-0 nova_compute[254880]: 2026-01-26 10:09:30.960 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:09:30 compute-0 nova_compute[254880]: 2026-01-26 10:09:30.960 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:09:30 compute-0 nova_compute[254880]: 2026-01-26 10:09:30.979 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:09:30 compute-0 nova_compute[254880]: 2026-01-26 10:09:30.979 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:09:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v819: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 26 10:09:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:31 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:31 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1839213161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:09:31 compute-0 nova_compute[254880]: 2026-01-26 10:09:31.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:09:31 compute-0 nova_compute[254880]: 2026-01-26 10:09:31.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:09:31 compute-0 nova_compute[254880]: 2026-01-26 10:09:31.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:09:31 compute-0 nova_compute[254880]: 2026-01-26 10:09:31.985 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:09:31 compute-0 nova_compute[254880]: 2026-01-26 10:09:31.985 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:09:31 compute-0 nova_compute[254880]: 2026-01-26 10:09:31.986 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:09:31 compute-0 nova_compute[254880]: 2026-01-26 10:09:31.986 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:09:31 compute-0 nova_compute[254880]: 2026-01-26 10:09:31.986 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:09:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:31 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:09:32 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1818352728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:09:32 compute-0 nova_compute[254880]: 2026-01-26 10:09:32.423 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:09:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:32.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:32 compute-0 nova_compute[254880]: 2026-01-26 10:09:32.576 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:09:32 compute-0 nova_compute[254880]: 2026-01-26 10:09:32.577 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4651MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:09:32 compute-0 nova_compute[254880]: 2026-01-26 10:09:32.578 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:09:32 compute-0 nova_compute[254880]: 2026-01-26 10:09:32.578 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:09:32 compute-0 nova_compute[254880]: 2026-01-26 10:09:32.627 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:09:32 compute-0 nova_compute[254880]: 2026-01-26 10:09:32.628 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:09:32 compute-0 nova_compute[254880]: 2026-01-26 10:09:32.641 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:09:32 compute-0 ceph-mon[74456]: pgmap v819: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 26 10:09:32 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/267005787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:09:32 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1818352728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:09:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:32 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:09:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2260971169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:09:33 compute-0 nova_compute[254880]: 2026-01-26 10:09:33.135 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:09:33 compute-0 nova_compute[254880]: 2026-01-26 10:09:33.141 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:09:33 compute-0 nova_compute[254880]: 2026-01-26 10:09:33.156 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:09:33 compute-0 nova_compute[254880]: 2026-01-26 10:09:33.158 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:09:33 compute-0 nova_compute[254880]: 2026-01-26 10:09:33.159 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
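The inventory record at 10:09:33.156 is what placement admits allocations against. Assuming placement's usual capacity formula, the usable figure per resource class is (total - reserved) * allocation_ratio, so the logged numbers work out to roughly 32 schedulable vCPUs, 7167 MB of RAM, and 52 GB of disk:

    # Sketch: capacity math implied by the inventory logged above; placement
    # admits allocations while used + requested <= (total - reserved) * ratio.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc:9s} schedulable ~= {cap:.0f}")
    # -> MEMORY_MB 7167, VCPU 32, DISK_GB 52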
Jan 26 10:09:33 compute-0 nova_compute[254880]: 2026-01-26 10:09:33.166 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:09:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v820: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:09:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:09:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:09:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:33 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:33 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2260971169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:09:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:09:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:33 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:09:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:34.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
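The anonymous HEAD / requests from 192.168.122.100 and .102, repeating every two seconds and always returning 200, are load-balancer health probes rather than client traffic. A hand-rolled equivalent of such a probe; the port here is an assumption, use whatever the beast frontend binds in this deployment's rgw_frontends setting:

    # Sketch: minimal equivalent of the 2-second "HEAD /" health probe seen
    # in the radosgw beast access lines above.
    import http.client

    def rgw_alive(host="192.168.122.100", port=8080, timeout=2.0):
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except OSError:
            return False
        finally:
            conn.close()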
Jan 26 10:09:34 compute-0 nova_compute[254880]: 2026-01-26 10:09:34.158 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:09:34 compute-0 nova_compute[254880]: 2026-01-26 10:09:34.159 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:09:34 compute-0 nova_compute[254880]: 2026-01-26 10:09:34.380 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:34.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:34 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb38012340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:34 compute-0 ceph-mon[74456]: pgmap v820: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:09:34 compute-0 nova_compute[254880]: 2026-01-26 10:09:34.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:09:34 compute-0 nova_compute[254880]: 2026-01-26 10:09:34.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
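The skip is expected: deferred (soft) deletion only engages when reclaim_instance_interval is positive, and it defaults to 0. An illustrative nova.conf fragment that would make this periodic task do real work:

    [DEFAULT]
    # Keep soft-deleted instances for an hour; _reclaim_queued_deletes then
    # purges anything older. <= 0 (the default) is the no-op logged above.
    reclaim_instance_interval = 3600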
Jan 26 10:09:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v821: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:09:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:35 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:35 compute-0 nova_compute[254880]: 2026-01-26 10:09:35.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:09:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:35 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180045d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:09:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:36.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:09:36 compute-0 sshd-session[264044]: Invalid user postgres from 157.245.76.178 port 39634
Jan 26 10:09:36 compute-0 sshd-session[264044]: Connection closed by invalid user postgres 157.245.76.178 port 39634 [preauth]
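The two sshd-session lines are background noise unrelated to the deployment: an internet host probing for a postgres account and disconnecting before authentication. A throwaway tally of such probes from captured journal text (the file path is illustrative):

    # Sketch: count "Invalid user NAME from IP" probes; the regex matches the
    # sshd-session line format shown above.
    import re
    from collections import Counter

    pat = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    hits = Counter()
    with open("journal.txt") as fh:  # path illustrative
        for line in fh:
            if (m := pat.search(line)):
                hits[m.group(1), m.group(2)] += 1
    print(hits.most_common(5))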
Jan 26 10:09:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:36.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:09:36] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 26 10:09:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:09:36] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 26 10:09:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:36 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:37 compute-0 ceph-mon[74456]: pgmap v821: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:09:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:09:37.128Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:09:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:09:37.129Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
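Both dashboard receivers (compute-1 and compute-2 on port 8443) are timing out, so every alert dispatch burns its two retries and fails. A quick probe that mimics the dispatcher's POST, using the URL from the log line and what I take to be the standard Alertmanager webhook payload shape (version "4") with an empty alert list:

    # Sketch: reproduce the dispatcher's failure mode by posting to the
    # dashboard receiver the way Alertmanager's webhook integration does.
    import json
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    body = json.dumps({"version": "4", "status": "firing", "alerts": []}).encode()
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            print("receiver answered:", resp.status)
    except OSError as exc:  # URLError and socket timeouts both land here
        print("receiver unreachable:", exc)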
Jan 26 10:09:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v822: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:09:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:37 compute-0 nova_compute[254880]: 2026-01-26 10:09:37.953 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:09:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:38 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:09:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:38.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:09:38 compute-0 ceph-mon[74456]: pgmap v822: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:09:38 compute-0 nova_compute[254880]: 2026-01-26 10:09:38.167 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:09:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:38.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:38 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:39 compute-0 nova_compute[254880]: 2026-01-26 10:09:39.421 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v823: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:09:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:39 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:40.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:09:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:40.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:40 compute-0 ceph-mon[74456]: pgmap v823: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:09:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v824: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Jan 26 10:09:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:41 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180046a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:42 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:09:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:42.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:09:42 compute-0 podman[264055]: 2026-01-26 10:09:42.121303092 +0000 UTC m=+0.052925071 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
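The health_status=healthy event above is podman running the configured test (/openstack/healthcheck) inside the ovn_metadata_agent container. To read the recorded state back on demand, assuming a podman recent enough to expose .State.Health (older releases used .State.Healthcheck):

    # Sketch: query the health state podman recorded for the container in
    # the event above.
    import subprocess

    status = subprocess.check_output(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_metadata_agent"],
        text=True,
    ).strip()
    print(status)  # "healthy" should match health_status in the journal event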
Jan 26 10:09:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:42.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:42 compute-0 ceph-mon[74456]: pgmap v824: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Jan 26 10:09:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:42 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:43 compute-0 sudo[264076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:09:43 compute-0 sudo[264076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:43 compute-0 sudo[264076]: pam_unix(sudo:session): session closed for user root
Jan 26 10:09:43 compute-0 nova_compute[254880]: 2026-01-26 10:09:43.168 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:09:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v825: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:09:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:44 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180046c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:44.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:44 compute-0 ceph-mon[74456]: pgmap v825: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:09:44 compute-0 nova_compute[254880]: 2026-01-26 10:09:44.455 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:44.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:44 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb240041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v826: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:09:45 compute-0 sudo[264103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:09:45 compute-0 sudo[264103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:45 compute-0 sudo[264103]: pam_unix(sudo:session): session closed for user root
Jan 26 10:09:45 compute-0 sudo[264128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 26 10:09:45 compute-0 sudo[264128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:45 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:45 compute-0 sudo[264128]: pam_unix(sudo:session): session closed for user root
Jan 26 10:09:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:09:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:09:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:09:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:09:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:46 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:46 compute-0 sudo[264174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:09:46 compute-0 sudo[264174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:46 compute-0 sudo[264174]: pam_unix(sudo:session): session closed for user root
Jan 26 10:09:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:46.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:46 compute-0 sudo[264199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:09:46 compute-0 sudo[264199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:46.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:46 compute-0 sudo[264199]: pam_unix(sudo:session): session closed for user root
Jan 26 10:09:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:09:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:09:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:09:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:09:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:09:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:09:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:09:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:09:46] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 26 10:09:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:09:46] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 26 10:09:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:09:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:09:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:09:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:09:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
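This burst of mon commands is mgr/cephadm refreshing the minimal client config and keyring material it distributes to managed hosts. The same two calls exist on the CLI, which makes it easy to see what ends up in /etc/ceph on those hosts:

    # Sketch: run by hand the two mon commands cephadm just dispatched; the
    # output is the minimal ceph.conf and the client.admin keyring.
    import subprocess

    minimal_conf = subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"], text=True)
    admin_key = subprocess.check_output(
        ["ceph", "auth", "get", "client.admin"], text=True)
    print(minimal_conf)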
Jan 26 10:09:46 compute-0 ceph-mon[74456]: pgmap v826: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:09:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:09:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:09:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:09:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:09:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:09:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:09:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:09:46 compute-0 sudo[264257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:09:46 compute-0 sudo[264257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:46 compute-0 sudo[264257]: pam_unix(sudo:session): session closed for user root
Jan 26 10:09:46 compute-0 sudo[264282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:09:46 compute-0 sudo[264282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:46 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180046e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:09:47.129Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:09:47 compute-0 podman[264347]: 2026-01-26 10:09:47.176745447 +0000 UTC m=+0.037435943 container create 8cb6e74f52f26f6eb7ea712c9ddc794dbf01b9adb8188065f7ebb3febb658218 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_wescoff, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 10:09:47 compute-0 systemd[1]: Started libpod-conmon-8cb6e74f52f26f6eb7ea712c9ddc794dbf01b9adb8188065f7ebb3febb658218.scope.
Jan 26 10:09:47 compute-0 podman[264347]: 2026-01-26 10:09:47.161686353 +0000 UTC m=+0.022376859 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:09:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:09:47 compute-0 podman[264347]: 2026-01-26 10:09:47.348021279 +0000 UTC m=+0.208711785 container init 8cb6e74f52f26f6eb7ea712c9ddc794dbf01b9adb8188065f7ebb3febb658218 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:09:47 compute-0 podman[264347]: 2026-01-26 10:09:47.354898341 +0000 UTC m=+0.215588827 container start 8cb6e74f52f26f6eb7ea712c9ddc794dbf01b9adb8188065f7ebb3febb658218 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 10:09:47 compute-0 unruffled_wescoff[264363]: 167 167
Jan 26 10:09:47 compute-0 systemd[1]: libpod-8cb6e74f52f26f6eb7ea712c9ddc794dbf01b9adb8188065f7ebb3febb658218.scope: Deactivated successfully.
Jan 26 10:09:47 compute-0 podman[264347]: 2026-01-26 10:09:47.379859884 +0000 UTC m=+0.240550380 container attach 8cb6e74f52f26f6eb7ea712c9ddc794dbf01b9adb8188065f7ebb3febb658218 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_wescoff, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 26 10:09:47 compute-0 podman[264347]: 2026-01-26 10:09:47.380710738 +0000 UTC m=+0.241401224 container died 8cb6e74f52f26f6eb7ea712c9ddc794dbf01b9adb8188065f7ebb3febb658218 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:09:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-86e1fa0bdd420366e014f06d8c60ce249fcc98c787e8f1304d238596fcc212cc-merged.mount: Deactivated successfully.
Jan 26 10:09:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v827: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:09:47 compute-0 podman[264347]: 2026-01-26 10:09:47.639644075 +0000 UTC m=+0.500334561 container remove 8cb6e74f52f26f6eb7ea712c9ddc794dbf01b9adb8188065f7ebb3febb658218 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 26 10:09:47 compute-0 systemd[1]: libpod-conmon-8cb6e74f52f26f6eb7ea712c9ddc794dbf01b9adb8188065f7ebb3febb658218.scope: Deactivated successfully.
Jan 26 10:09:47 compute-0 podman[264388]: 2026-01-26 10:09:47.800464062 +0000 UTC m=+0.043439254 container create 5894ee2a5eac0f31905c3f4b298c2b125806ae4567405d9109d8a355baca882b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 10:09:47 compute-0 podman[264388]: 2026-01-26 10:09:47.781576619 +0000 UTC m=+0.024551831 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:09:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:47 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180046e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:47 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:09:47 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:09:47 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:09:47 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:09:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:48 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:48 compute-0 systemd[1]: Started libpod-conmon-5894ee2a5eac0f31905c3f4b298c2b125806ae4567405d9109d8a355baca882b.scope.
Jan 26 10:09:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:09:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0b5efc1f4a75c5a4d4144cab07ad7b5c194bafd992bcc6decff0475e1bb18f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:09:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:09:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:48.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:09:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0b5efc1f4a75c5a4d4144cab07ad7b5c194bafd992bcc6decff0475e1bb18f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:09:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0b5efc1f4a75c5a4d4144cab07ad7b5c194bafd992bcc6decff0475e1bb18f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:09:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0b5efc1f4a75c5a4d4144cab07ad7b5c194bafd992bcc6decff0475e1bb18f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:09:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b0b5efc1f4a75c5a4d4144cab07ad7b5c194bafd992bcc6decff0475e1bb18f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:09:48 compute-0 podman[264388]: 2026-01-26 10:09:48.134160462 +0000 UTC m=+0.377135674 container init 5894ee2a5eac0f31905c3f4b298c2b125806ae4567405d9109d8a355baca882b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_black, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 10:09:48 compute-0 podman[264388]: 2026-01-26 10:09:48.142871677 +0000 UTC m=+0.385846869 container start 5894ee2a5eac0f31905c3f4b298c2b125806ae4567405d9109d8a355baca882b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_black, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 10:09:48 compute-0 podman[264388]: 2026-01-26 10:09:48.157480759 +0000 UTC m=+0.400455971 container attach 5894ee2a5eac0f31905c3f4b298c2b125806ae4567405d9109d8a355baca882b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_black, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 10:09:48 compute-0 nova_compute[254880]: 2026-01-26 10:09:48.169 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:48 compute-0 peaceful_black[264403]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:09:48 compute-0 peaceful_black[264403]: --> All data devices are unavailable
Jan 26 10:09:48 compute-0 systemd[1]: libpod-5894ee2a5eac0f31905c3f4b298c2b125806ae4567405d9109d8a355baca882b.scope: Deactivated successfully.
Jan 26 10:09:48 compute-0 podman[264388]: 2026-01-26 10:09:48.456610427 +0000 UTC m=+0.699585619 container died 5894ee2a5eac0f31905c3f4b298c2b125806ae4567405d9109d8a355baca882b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_black, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:09:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:09:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b0b5efc1f4a75c5a4d4144cab07ad7b5c194bafd992bcc6decff0475e1bb18f-merged.mount: Deactivated successfully.
Jan 26 10:09:48 compute-0 podman[264388]: 2026-01-26 10:09:48.518957542 +0000 UTC m=+0.761932734 container remove 5894ee2a5eac0f31905c3f4b298c2b125806ae4567405d9109d8a355baca882b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_black, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 10:09:48 compute-0 systemd[1]: libpod-conmon-5894ee2a5eac0f31905c3f4b298c2b125806ae4567405d9109d8a355baca882b.scope: Deactivated successfully.
Jan 26 10:09:48 compute-0 sudo[264282]: pam_unix(sudo:session): session closed for user root
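The interesting outcome of that run is at 10:09:48: lvm batch was pointed at /dev/ceph_vg0/ceph_lv0 but reported "passed data devices: 0 physical, 1 LVM" and "All data devices are unavailable", i.e. nothing to create, which is the normal result when the LV is already consumed by an existing OSD. The lvm list call cephadm issues in the next sudo block is how it confirms ownership; a hand-run equivalent, with the output shape an assumption (a dict keyed by OSD id, each value a list of device dicts carrying "lv_path" and a "tags" mapping):

    # Sketch: interpret "All data devices are unavailable" by asking which
    # OSD already owns the LV, via the same `lvm list --format json` call.
    import json
    import subprocess

    out = subprocess.check_output(
        ["cephadm", "ceph-volume", "--", "lvm", "list", "--format", "json"],
        text=True,
    )
    for osd_id, devs in json.loads(out).items():
        for dev in devs:
            tags = dev.get("tags", {})
            print(f"osd.{osd_id} owns {dev.get('lv_path')} "
                  f"(osd_fsid={tags.get('ceph.osd_fsid')})")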
Jan 26 10:09:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:48.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:48 compute-0 sudo[264436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:09:48 compute-0 sudo[264436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:48 compute-0 sudo[264436]: pam_unix(sudo:session): session closed for user root
Jan 26 10:09:48 compute-0 sudo[264461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:09:48 compute-0 sudo[264461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:09:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:09:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:09:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:09:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:09:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:09:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:09:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:09:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:48 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:49 compute-0 ceph-mon[74456]: pgmap v827: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:09:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1238899685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:09:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:09:49 compute-0 podman[264526]: 2026-01-26 10:09:49.059531335 +0000 UTC m=+0.054240487 container create 917f7f42daec7f75c4a55f558325df94bfeb261e528e1af2767b805e65e9b756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Jan 26 10:09:49 compute-0 podman[264526]: 2026-01-26 10:09:49.026531926 +0000 UTC m=+0.021241098 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:09:49 compute-0 systemd[1]: Started libpod-conmon-917f7f42daec7f75c4a55f558325df94bfeb261e528e1af2767b805e65e9b756.scope.
Jan 26 10:09:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:09:49 compute-0 podman[264526]: 2026-01-26 10:09:49.216905865 +0000 UTC m=+0.211615037 container init 917f7f42daec7f75c4a55f558325df94bfeb261e528e1af2767b805e65e9b756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:09:49 compute-0 podman[264526]: 2026-01-26 10:09:49.227242486 +0000 UTC m=+0.221951638 container start 917f7f42daec7f75c4a55f558325df94bfeb261e528e1af2767b805e65e9b756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_sammet, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 10:09:49 compute-0 wizardly_sammet[264542]: 167 167
Jan 26 10:09:49 compute-0 systemd[1]: libpod-917f7f42daec7f75c4a55f558325df94bfeb261e528e1af2767b805e65e9b756.scope: Deactivated successfully.
Jan 26 10:09:49 compute-0 podman[264526]: 2026-01-26 10:09:49.244342267 +0000 UTC m=+0.239051419 container attach 917f7f42daec7f75c4a55f558325df94bfeb261e528e1af2767b805e65e9b756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 10:09:49 compute-0 podman[264526]: 2026-01-26 10:09:49.24483848 +0000 UTC m=+0.239547642 container died 917f7f42daec7f75c4a55f558325df94bfeb261e528e1af2767b805e65e9b756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 10:09:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3804433d697f602f5fcf08ad6a57e890eaf1b0071dfaa06be4f8531ce4316569-merged.mount: Deactivated successfully.
Jan 26 10:09:49 compute-0 nova_compute[254880]: 2026-01-26 10:09:49.493 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:49 compute-0 podman[264526]: 2026-01-26 10:09:49.562641935 +0000 UTC m=+0.557351087 container remove 917f7f42daec7f75c4a55f558325df94bfeb261e528e1af2767b805e65e9b756 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_sammet, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 26 10:09:49 compute-0 systemd[1]: libpod-conmon-917f7f42daec7f75c4a55f558325df94bfeb261e528e1af2767b805e65e9b756.scope: Deactivated successfully.
Jan 26 10:09:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v828: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:09:49 compute-0 podman[264568]: 2026-01-26 10:09:49.703148699 +0000 UTC m=+0.023692737 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:09:49 compute-0 podman[264568]: 2026-01-26 10:09:49.875114388 +0000 UTC m=+0.195658396 container create 757ddafcd7bfff8794401c1bc7347a5698c48c05712d392ef56a2c34bcdd314f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_archimedes, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:09:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:49 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:49 compute-0 systemd[1]: Started libpod-conmon-757ddafcd7bfff8794401c1bc7347a5698c48c05712d392ef56a2c34bcdd314f.scope.
Jan 26 10:09:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25a488a9f4cee88c3e0279e534d3352ce14711191b5e487b5a37b6d2a602225/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25a488a9f4cee88c3e0279e534d3352ce14711191b5e487b5a37b6d2a602225/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25a488a9f4cee88c3e0279e534d3352ce14711191b5e487b5a37b6d2a602225/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d25a488a9f4cee88c3e0279e534d3352ce14711191b5e487b5a37b6d2a602225/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:09:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:50 compute-0 podman[264568]: 2026-01-26 10:09:50.021813397 +0000 UTC m=+0.342357425 container init 757ddafcd7bfff8794401c1bc7347a5698c48c05712d392ef56a2c34bcdd314f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:09:50 compute-0 podman[264568]: 2026-01-26 10:09:50.029880264 +0000 UTC m=+0.350424272 container start 757ddafcd7bfff8794401c1bc7347a5698c48c05712d392ef56a2c34bcdd314f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 26 10:09:50 compute-0 podman[264568]: 2026-01-26 10:09:50.056256507 +0000 UTC m=+0.376800545 container attach 757ddafcd7bfff8794401c1bc7347a5698c48c05712d392ef56a2c34bcdd314f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:09:50 compute-0 ceph-mon[74456]: pgmap v828: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:09:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:50.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]: {
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:     "0": [
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:         {
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:             "devices": [
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "/dev/loop3"
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:             ],
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:             "lv_name": "ceph_lv0",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:             "lv_size": "21470642176",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:             "name": "ceph_lv0",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:             "tags": {
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "ceph.cluster_name": "ceph",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "ceph.crush_device_class": "",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "ceph.encrypted": "0",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "ceph.osd_id": "0",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "ceph.type": "block",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "ceph.vdo": "0",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:                 "ceph.with_tpm": "0"
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:             },
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:             "type": "block",
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:             "vg_name": "ceph_vg0"
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:         }
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]:     ]
Jan 26 10:09:50 compute-0 recursing_archimedes[264584]: }
Jan 26 10:09:50 compute-0 systemd[1]: libpod-757ddafcd7bfff8794401c1bc7347a5698c48c05712d392ef56a2c34bcdd314f.scope: Deactivated successfully.
Jan 26 10:09:50 compute-0 podman[264568]: 2026-01-26 10:09:50.326508963 +0000 UTC m=+0.647052971 container died 757ddafcd7bfff8794401c1bc7347a5698c48c05712d392ef56a2c34bcdd314f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:09:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d25a488a9f4cee88c3e0279e534d3352ce14711191b5e487b5a37b6d2a602225-merged.mount: Deactivated successfully.
Jan 26 10:09:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:09:50 compute-0 podman[264568]: 2026-01-26 10:09:50.549143068 +0000 UTC m=+0.869687076 container remove 757ddafcd7bfff8794401c1bc7347a5698c48c05712d392ef56a2c34bcdd314f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_archimedes, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 10:09:50 compute-0 systemd[1]: libpod-conmon-757ddafcd7bfff8794401c1bc7347a5698c48c05712d392ef56a2c34bcdd314f.scope: Deactivated successfully.
Jan 26 10:09:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:50.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:50 compute-0 sudo[264461]: pam_unix(sudo:session): session closed for user root
Jan 26 10:09:50 compute-0 sudo[264609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:09:50 compute-0 sudo[264609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:50 compute-0 sudo[264609]: pam_unix(sudo:session): session closed for user root
Jan 26 10:09:50 compute-0 sudo[264634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:09:50 compute-0 sudo[264634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:51 compute-0 podman[264699]: 2026-01-26 10:09:51.071990292 +0000 UTC m=+0.019341225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:09:51 compute-0 podman[264699]: 2026-01-26 10:09:51.460176538 +0000 UTC m=+0.407527451 container create 267dce75dfa5a710dc3d3b20d2aeb76b49b0723fb8ba8f347fb5c0f253db279c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 10:09:51 compute-0 systemd[1]: Started libpod-conmon-267dce75dfa5a710dc3d3b20d2aeb76b49b0723fb8ba8f347fb5c0f253db279c.scope.
Jan 26 10:09:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:09:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v829: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 10:09:51 compute-0 podman[264699]: 2026-01-26 10:09:51.644670399 +0000 UTC m=+0.592021332 container init 267dce75dfa5a710dc3d3b20d2aeb76b49b0723fb8ba8f347fb5c0f253db279c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 10:09:51 compute-0 podman[264699]: 2026-01-26 10:09:51.652487189 +0000 UTC m=+0.599838122 container start 267dce75dfa5a710dc3d3b20d2aeb76b49b0723fb8ba8f347fb5c0f253db279c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 10:09:51 compute-0 infallible_goldstine[264715]: 167 167
Jan 26 10:09:51 compute-0 podman[264699]: 2026-01-26 10:09:51.656896543 +0000 UTC m=+0.604247556 container attach 267dce75dfa5a710dc3d3b20d2aeb76b49b0723fb8ba8f347fb5c0f253db279c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_goldstine, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:09:51 compute-0 systemd[1]: libpod-267dce75dfa5a710dc3d3b20d2aeb76b49b0723fb8ba8f347fb5c0f253db279c.scope: Deactivated successfully.
Jan 26 10:09:51 compute-0 podman[264699]: 2026-01-26 10:09:51.657877151 +0000 UTC m=+0.605228054 container died 267dce75dfa5a710dc3d3b20d2aeb76b49b0723fb8ba8f347fb5c0f253db279c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_goldstine, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:09:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9489bca15c1fdccd28d6cd1dc929fc0b37d9953d4150ddaee74f4a95646e3c39-merged.mount: Deactivated successfully.
Jan 26 10:09:51 compute-0 podman[264699]: 2026-01-26 10:09:51.788367564 +0000 UTC m=+0.735718477 container remove 267dce75dfa5a710dc3d3b20d2aeb76b49b0723fb8ba8f347fb5c0f253db279c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_goldstine, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 26 10:09:51 compute-0 systemd[1]: libpod-conmon-267dce75dfa5a710dc3d3b20d2aeb76b49b0723fb8ba8f347fb5c0f253db279c.scope: Deactivated successfully.
Jan 26 10:09:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:51 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:52.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:52 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:52.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:52 compute-0 podman[264743]: 2026-01-26 10:09:52.485125983 +0000 UTC m=+0.023852462 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:09:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:52 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:52 compute-0 podman[264743]: 2026-01-26 10:09:52.946870937 +0000 UTC m=+0.485597406 container create 2256f6fe3cb0c15fd6ab98b46c4f0cce19047bafde1e73a6dbebef69b514cafa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mclaren, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:09:52 compute-0 ceph-mon[74456]: pgmap v829: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 10:09:53 compute-0 systemd[1]: Started libpod-conmon-2256f6fe3cb0c15fd6ab98b46c4f0cce19047bafde1e73a6dbebef69b514cafa.scope.
Jan 26 10:09:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a9b27cd5e565eb9e95702867583441a8407f4347974362209d79c8a94af91d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a9b27cd5e565eb9e95702867583441a8407f4347974362209d79c8a94af91d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a9b27cd5e565eb9e95702867583441a8407f4347974362209d79c8a94af91d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a9b27cd5e565eb9e95702867583441a8407f4347974362209d79c8a94af91d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:09:53 compute-0 nova_compute[254880]: 2026-01-26 10:09:53.172 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:53 compute-0 podman[264743]: 2026-01-26 10:09:53.187450129 +0000 UTC m=+0.726176668 container init 2256f6fe3cb0c15fd6ab98b46c4f0cce19047bafde1e73a6dbebef69b514cafa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mclaren, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 10:09:53 compute-0 podman[264743]: 2026-01-26 10:09:53.195947948 +0000 UTC m=+0.734674417 container start 2256f6fe3cb0c15fd6ab98b46c4f0cce19047bafde1e73a6dbebef69b514cafa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mclaren, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 26 10:09:53 compute-0 podman[264743]: 2026-01-26 10:09:53.199546849 +0000 UTC m=+0.738273398 container attach 2256f6fe3cb0c15fd6ab98b46c4f0cce19047bafde1e73a6dbebef69b514cafa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:09:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:09:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v830: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 10:09:53 compute-0 lvm[264834]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:09:53 compute-0 lvm[264834]: VG ceph_vg0 finished
Jan 26 10:09:53 compute-0 elated_mclaren[264760]: {}
Jan 26 10:09:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:53 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:53 compute-0 systemd[1]: libpod-2256f6fe3cb0c15fd6ab98b46c4f0cce19047bafde1e73a6dbebef69b514cafa.scope: Deactivated successfully.
Jan 26 10:09:53 compute-0 podman[264743]: 2026-01-26 10:09:53.904663793 +0000 UTC m=+1.443390262 container died 2256f6fe3cb0c15fd6ab98b46c4f0cce19047bafde1e73a6dbebef69b514cafa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mclaren, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:09:53 compute-0 systemd[1]: libpod-2256f6fe3cb0c15fd6ab98b46c4f0cce19047bafde1e73a6dbebef69b514cafa.scope: Consumed 1.070s CPU time.
Jan 26 10:09:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3a9b27cd5e565eb9e95702867583441a8407f4347974362209d79c8a94af91d-merged.mount: Deactivated successfully.
Jan 26 10:09:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:54.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:54 compute-0 ceph-mon[74456]: pgmap v830: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 10:09:54 compute-0 podman[264743]: 2026-01-26 10:09:54.252343898 +0000 UTC m=+1.791070357 container remove 2256f6fe3cb0c15fd6ab98b46c4f0cce19047bafde1e73a6dbebef69b514cafa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mclaren, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 10:09:54 compute-0 systemd[1]: libpod-conmon-2256f6fe3cb0c15fd6ab98b46c4f0cce19047bafde1e73a6dbebef69b514cafa.scope: Deactivated successfully.
Jan 26 10:09:54 compute-0 sudo[264634]: pam_unix(sudo:session): session closed for user root
Jan 26 10:09:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:09:54 compute-0 podman[264851]: 2026-01-26 10:09:54.392378659 +0000 UTC m=+0.080500687 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 26 10:09:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:54 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:09:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:09:54 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:09:54 compute-0 nova_compute[254880]: 2026-01-26 10:09:54.494 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:54 compute-0 sudo[264879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:09:54 compute-0 sudo[264879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:09:54 compute-0 sudo[264879]: pam_unix(sudo:session): session closed for user root
Jan 26 10:09:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:54.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:09:54.694 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:09:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:09:54.694 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:09:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:09:54.694 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:09:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:54 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v831: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:09:55 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:09:55 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:09:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1908295695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:09:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2437790993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:09:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:55 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:56.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:56 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:56.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:09:56] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Jan 26 10:09:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:09:56] "GET /metrics HTTP/1.1" 200 48464 "" "Prometheus/2.51.0"
Jan 26 10:09:56 compute-0 ceph-mon[74456]: pgmap v831: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:09:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:56 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:09:57.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:09:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v832: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 10:09:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:57 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:09:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:09:58.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:09:58 compute-0 nova_compute[254880]: 2026-01-26 10:09:58.174 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:58 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:09:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:09:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:09:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:09:58.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:09:58 compute-0 ceph-mon[74456]: pgmap v832: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 10:09:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1722280942' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:09:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1722280942' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:09:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:58 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:09:59 compute-0 nova_compute[254880]: 2026-01-26 10:09:59.496 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:09:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v833: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 10:09:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:09:59 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Jan 26 10:10:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Jan 26 10:10:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] :      osd.2 observed slow operation indications in BlueStore
Jan 26 10:10:00 compute-0 ceph-mon[74456]: pgmap v833: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 10:10:00 compute-0 ceph-mon[74456]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore
Jan 26 10:10:00 compute-0 ceph-mon[74456]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Jan 26 10:10:00 compute-0 ceph-mon[74456]:      osd.2 observed slow operation indications in BlueStore
Jan 26 10:10:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:10:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:00.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:10:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:10:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:00.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v834: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 10:10:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:01 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:02.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:02 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:02.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:02 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180047c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:02 compute-0 ceph-mon[74456]: pgmap v834: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 10:10:03 compute-0 nova_compute[254880]: 2026-01-26 10:10:03.175 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:03 compute-0 sudo[264912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:10:03 compute-0 sudo[264912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:10:03 compute-0 sudo[264912]: pam_unix(sudo:session): session closed for user root
Jan 26 10:10:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:10:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v835: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:10:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:10:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:10:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:03 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:04.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:04 compute-0 ceph-mon[74456]: pgmap v835: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:10:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:10:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:04 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:04 compute-0 nova_compute[254880]: 2026-01-26 10:10:04.551 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:04.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:04 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v836: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:10:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:05 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180047e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:06.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:06 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:06.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:10:06] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:10:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:10:06] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:10:06 compute-0 ceph-mon[74456]: pgmap v836: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:10:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:06 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:10:07.132Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
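
Every ten seconds Alertmanager fails to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2 (`context deadline exceeded` after 2 attempts), while compute-0's own dashboard is never mentioned — pointing at the peer dashboards' 8443 endpoints being down or unreachable rather than a local fault. A probe of one receiver with an explicit timeout, using a hypothetical minimal payload (the receiver's accepted body format is an assumption here):

    import json, urllib.error, urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url,
        data=json.dumps({"alerts": []}).encode(),  # hypothetical minimal body
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("receiver answered:", resp.status)
    except urllib.error.URLError as exc:
        print("unreachable:", exc.reason)  # matches the dispatcher's deadline errors
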
Jan 26 10:10:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v837: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:10:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:07 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:08.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:08 compute-0 nova_compute[254880]: 2026-01-26 10:10:08.205 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:08 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:10:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:08.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:08 compute-0 ceph-mon[74456]: pgmap v837: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:10:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:08 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v838: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:10:09 compute-0 nova_compute[254880]: 2026-01-26 10:10:09.586 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:09 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:10.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:10:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:10.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:10 compute-0 ceph-mon[74456]: pgmap v838: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:10:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v839: 353 pgs: 353 active+clean; 113 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Jan 26 10:10:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:11 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:12.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:12 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:12.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:12 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:12 compute-0 ceph-mon[74456]: pgmap v839: 353 pgs: 353 active+clean; 113 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Jan 26 10:10:13 compute-0 podman[264948]: 2026-01-26 10:10:13.113861233 +0000 UTC m=+0.049494910 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
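
This podman event records the periodic healthcheck for ovn_metadata_agent passing (`health_status=healthy`, failing streak 0); per the embedded config_data, the check executes `/openstack/healthcheck`, bind-mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent. The same check can be run on demand:

    import subprocess

    # `podman healthcheck run` executes the container's configured check;
    # exit status 0 means healthy, non-zero means unhealthy.
    r = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    )
    print(r.returncode, r.stdout or r.stderr)
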
Jan 26 10:10:13 compute-0 nova_compute[254880]: 2026-01-26 10:10:13.208 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:10:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v840: 353 pgs: 353 active+clean; 113 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Jan 26 10:10:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:13 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:14.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:14 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:14 compute-0 nova_compute[254880]: 2026-01-26 10:10:14.589 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:14.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:14 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:15 compute-0 ceph-mon[74456]: pgmap v840: 353 pgs: 353 active+clean; 113 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Jan 26 10:10:15 compute-0 sshd-session[264967]: Invalid user ubuntu from 117.50.196.2 port 40104
Jan 26 10:10:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v841: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 26 10:10:15 compute-0 sshd-session[264967]: Received disconnect from 117.50.196.2 port 40104:11:  [preauth]
Jan 26 10:10:15 compute-0 sshd-session[264967]: Disconnected from invalid user ubuntu 117.50.196.2 port 40104 [preauth]
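
Interleaved with the cluster chatter, sshd is rejecting password-guessing probes from Internet addresses (`Invalid user ubuntu` from 117.50.196.2 here, `postgres` from 157.245.76.178 at 10:10:19 below), so this address is evidently reachable from outside. A quick tally of offending sources from the journal — a sketch, assuming the unit is named sshd on this host:

    import collections, re, subprocess

    # Pull sshd's journal and count source IPs of "Invalid user" probes.
    out = subprocess.run(
        ["journalctl", "-u", "sshd", "--no-pager", "-o", "cat"],
        capture_output=True, text=True,
    ).stdout
    pairs = re.findall(r"Invalid user (\S+) from (\S+)", out)
    print(collections.Counter(ip for _, ip in pairs).most_common(10))
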
Jan 26 10:10:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:15 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:16.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:16 compute-0 ceph-mon[74456]: pgmap v841: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 26 10:10:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:16 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:16.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:10:16] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:10:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:10:16] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:10:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:16 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:10:17.134Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:10:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v842: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 26 10:10:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:17 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:18.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:18 compute-0 nova_compute[254880]: 2026-01-26 10:10:18.213 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:18 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:10:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:18.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:10:18
Jan 26 10:10:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:10:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:10:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'default.rgw.control', 'backups', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'images', 'volumes', '.rgw.root', 'cephfs.cephfs.data']
Jan 26 10:10:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
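
This balancer pass is a no-op: in upmap mode with a 5% misplaced ceiling it walked all twelve pools and prepared 0 of an allowed 10 upmap changes per run, which is expected while every PG is already active+clean on a small, evenly weighted cluster. The same state can be queried from the CLI:

    import subprocess

    # `ceph balancer status` reports the mode, active flag, and last optimization.
    out = subprocess.run(
        ["ceph", "balancer", "status"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)
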
Jan 26 10:10:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:10:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:10:18 compute-0 ceph-mon[74456]: pgmap v842: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 26 10:10:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:10:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:10:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:10:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:10:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:10:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:10:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:18 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:10:19.059 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:10:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:10:19.060 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
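
The metadata agent saw SB_Global.nb_cfg bump from 6 to 7 and deliberately waits 8 seconds before acknowledging; the matching DbSetCommand writing `neutron:ovn-metadata-sb-cfg: '7'` to Chassis_Private lands at 10:10:27 below. A sketch of the stagger pattern, on the assumption that the delay is randomized per chassis so that many agents do not write to the southbound DB at once:

    import random, time

    # Hypothetical stagger: pick a per-chassis delay, then acknowledge nb_cfg.
    delay = random.uniform(0, 10)  # assumed bound; the log shows 8 s chosen
    time.sleep(delay)
    # ...then write {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)} to Chassis_Private
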
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:10:19 compute-0 nova_compute[254880]: 2026-01-26 10:10:19.100 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
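
The `_maybe_adjust` pass exposes the autoscaler's arithmetic directly: each pool's pg target is its share of raw capacity times its bias times a cluster-wide PG budget, and every figure above is consistent with a budget of 300 (e.g. 0.00075666583235658 × 1.0 × 300 = 0.226999749706974 for 'vms'). A budget of 300 would match 3 OSDs at the default mon_target_pg_per_osd of 100 — plausible for this three-node cluster, though the OSD count is an assumption here. Targets are then quantized to a power of two and left alone unless they drift far from the current value, which is why 0.227 "quantized to 32 (current 32)" changes nothing:

    # Reproduce the pg_autoscaler targets logged above (assumptions: 3 OSDs,
    # default mon_target_pg_per_osd=100, hence a 300-PG budget).
    def pg_target(usage_ratio, bias, osds=3, target_pg_per_osd=100):
        return usage_ratio * bias * osds * target_pg_per_osd

    print(pg_target(7.185749983720779e-06, 1.0))  # 0.002155... as logged for '.mgr'
    print(pg_target(0.00075666583235658, 1.0))    # 0.226999... as logged for 'vms'
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.000610... as logged for 'cephfs.cephfs.meta'
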
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:10:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v843: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 26 10:10:19 compute-0 nova_compute[254880]: 2026-01-26 10:10:19.590 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:19 compute-0 sshd-session[264975]: Invalid user postgres from 157.245.76.178 port 45116
Jan 26 10:10:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:10:19 compute-0 sshd-session[264975]: Connection closed by invalid user postgres 157.245.76.178 port 45116 [preauth]
Jan 26 10:10:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:19 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:20.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:10:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:20.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:20 compute-0 ceph-mon[74456]: pgmap v843: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 26 10:10:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v844: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 10:10:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:21 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:22 compute-0 ceph-mon[74456]: pgmap v844: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 10:10:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:22.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 10:10:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 3013 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1620 writes, 4804 keys, 1620 commit groups, 1.0 writes per commit group, ingest: 4.82 MB, 0.01 MB/s
                                           Interval WAL: 1620 writes, 710 syncs, 2.28 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
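
The OSD's RocksDB stats dump covers 1800 s of uptime in 600 s intervals, and the per-interval figures are internally consistent — 1620 WAL writes over 710 syncs gives the logged 2.28 writes per sync:

    # Check the interval figures from the DB Stats dump above.
    writes, syncs, interval_s = 1620, 710, 600.0
    print(round(writes / syncs, 2))       # 2.28 writes per sync, as logged
    print(round(writes / interval_s, 1))  # ~2.7 writes/s sustained over the interval
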
Jan 26 10:10:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:22 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:22.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:22 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:23 compute-0 nova_compute[254880]: 2026-01-26 10:10:23.217 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:23 compute-0 sudo[264982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:10:23 compute-0 sudo[264982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:10:23 compute-0 sudo[264982]: pam_unix(sudo:session): session closed for user root
Jan 26 10:10:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:10:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v845: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 103 KiB/s wr, 15 op/s
Jan 26 10:10:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:23 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:10:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:24.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:10:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:24 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:24 compute-0 nova_compute[254880]: 2026-01-26 10:10:24.592 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:24.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:24 compute-0 ceph-mon[74456]: pgmap v845: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 103 KiB/s wr, 15 op/s
Jan 26 10:10:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:24 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24001a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:25 compute-0 podman[265009]: 2026-01-26 10:10:25.145624799 +0000 UTC m=+0.078915215 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 26 10:10:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v846: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 106 KiB/s wr, 15 op/s
Jan 26 10:10:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:25 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:26.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:26 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:26.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:10:26] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Jan 26 10:10:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:10:26] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Jan 26 10:10:26 compute-0 ceph-mon[74456]: pgmap v846: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 106 KiB/s wr, 15 op/s
Jan 26 10:10:26 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4287714874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:10:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:26 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:27 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:10:27.062 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:10:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:10:27.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:10:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v847: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 26 10:10:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:27 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24001a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:28.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:28 compute-0 nova_compute[254880]: 2026-01-26 10:10:28.222 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:28 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:10:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:28.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:28 compute-0 ceph-mon[74456]: pgmap v847: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 26 10:10:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:28 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004b00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v848: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 26 10:10:29 compute-0 nova_compute[254880]: 2026-01-26 10:10:29.595 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:29 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:30.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:10:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:30.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:30 compute-0 ceph-mon[74456]: pgmap v848: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 26 10:10:30 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3449492847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:10:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v849: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 26 10:10:31 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4146777078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:10:31 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1769423089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:10:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:31 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004b00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:31 compute-0 nova_compute[254880]: 2026-01-26 10:10:31.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:10:31 compute-0 nova_compute[254880]: 2026-01-26 10:10:31.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:10:31 compute-0 nova_compute[254880]: 2026-01-26 10:10:31.960 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:10:31 compute-0 nova_compute[254880]: 2026-01-26 10:10:31.979 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:10:31 compute-0 nova_compute[254880]: 2026-01-26 10:10:31.979 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:10:31 compute-0 nova_compute[254880]: 2026-01-26 10:10:31.980 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:10:31 compute-0 nova_compute[254880]: 2026-01-26 10:10:31.980 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:10:32 compute-0 nova_compute[254880]: 2026-01-26 10:10:32.005 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:10:32 compute-0 nova_compute[254880]: 2026-01-26 10:10:32.005 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:10:32 compute-0 nova_compute[254880]: 2026-01-26 10:10:32.006 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:10:32 compute-0 nova_compute[254880]: 2026-01-26 10:10:32.006 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:10:32 compute-0 nova_compute[254880]: 2026-01-26 10:10:32.007 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:10:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:10:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:32.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:10:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:32 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:10:32 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1680251038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:10:32 compute-0 nova_compute[254880]: 2026-01-26 10:10:32.502 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
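
The resource tracker sizes shared storage by shelling out to `ceph df --format=json` as client.openstack — the mon audit lines for `{"prefix": "df"}` above are those same calls arriving — and derives the ~59.9 GB free_disk figure from the cluster totals. Parsing the same output by hand, assuming the JSON carries a top-level `stats` object with `total_avail_bytes`:

    import json, subprocess

    # Same command the resource tracker logs; print free capacity in GiB.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]  # assumed key layout
    print(stats["total_avail_bytes"] / 2**30, "GiB available")
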
Jan 26 10:10:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:32.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:32 compute-0 nova_compute[254880]: 2026-01-26 10:10:32.678 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:10:32 compute-0 nova_compute[254880]: 2026-01-26 10:10:32.679 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4652MB free_disk=59.92194747924805GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:10:32 compute-0 nova_compute[254880]: 2026-01-26 10:10:32.679 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:10:32 compute-0 nova_compute[254880]: 2026-01-26 10:10:32.679 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:10:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:32 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:33 compute-0 ceph-mon[74456]: pgmap v849: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 26 10:10:33 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2218755935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:10:33 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1680251038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:10:33 compute-0 nova_compute[254880]: 2026-01-26 10:10:33.217 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:10:33 compute-0 nova_compute[254880]: 2026-01-26 10:10:33.218 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:10:33 compute-0 nova_compute[254880]: 2026-01-26 10:10:33.225 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:33 compute-0 nova_compute[254880]: 2026-01-26 10:10:33.241 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:10:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:10:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v850: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:10:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:10:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2371929807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:10:33 compute-0 nova_compute[254880]: 2026-01-26 10:10:33.681 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:10:33 compute-0 nova_compute[254880]: 2026-01-26 10:10:33.690 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:10:33 compute-0 nova_compute[254880]: 2026-01-26 10:10:33.710 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:10:33 compute-0 nova_compute[254880]: 2026-01-26 10:10:33.712 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:10:33 compute-0 nova_compute[254880]: 2026-01-26 10:10:33.712 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:10:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:10:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:10:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:33 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004860 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:34 compute-0 ceph-mon[74456]: pgmap v850: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:10:34 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2371929807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:10:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:10:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:34.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:34 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004b00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:34 compute-0 nova_compute[254880]: 2026-01-26 10:10:34.596 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:34.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:34 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2211083446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:10:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/577859797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:10:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v851: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 10:10:35 compute-0 nova_compute[254880]: 2026-01-26 10:10:35.707 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:10:35 compute-0 nova_compute[254880]: 2026-01-26 10:10:35.708 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:10:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:35 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:35 compute-0 nova_compute[254880]: 2026-01-26 10:10:35.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:10:35 compute-0 nova_compute[254880]: 2026-01-26 10:10:35.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:10:35 compute-0 nova_compute[254880]: 2026-01-26 10:10:35.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:10:35 compute-0 nova_compute[254880]: 2026-01-26 10:10:35.960 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:10:36 compute-0 ceph-mon[74456]: pgmap v851: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 10:10:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:36.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:36 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:36.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:10:36] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 26 10:10:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:10:36] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 26 10:10:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:36 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004b00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:10:37.137Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:10:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:10:37.137Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:10:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:10:37.137Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:10:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v852: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 26 10:10:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:38.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:38 compute-0 nova_compute[254880]: 2026-01-26 10:10:38.229 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:10:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:38 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:10:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:38.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:10:38 compute-0 ceph-mon[74456]: pgmap v852: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 26 10:10:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:38 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180048a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v853: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 26 10:10:39 compute-0 nova_compute[254880]: 2026-01-26 10:10:39.599 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:39 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004b00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:40.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:10:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:10:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:40.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:10:40 compute-0 ceph-mon[74456]: pgmap v853: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 26 10:10:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v854: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 10:10:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:41 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180048c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:42.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:42 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:42.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:42 compute-0 ceph-mon[74456]: pgmap v854: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 10:10:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:42 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:43 compute-0 nova_compute[254880]: 2026-01-26 10:10:43.234 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:10:43 compute-0 sudo[265099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:10:43 compute-0 sudo[265099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:10:43 compute-0 sudo[265099]: pam_unix(sudo:session): session closed for user root
Jan 26 10:10:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v855: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:10:43 compute-0 podman[265123]: 2026-01-26 10:10:43.596734639 +0000 UTC m=+0.075725652 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 10:10:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:44.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:44 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180048c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:44 compute-0 nova_compute[254880]: 2026-01-26 10:10:44.601 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:44.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:44 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:44 compute-0 ceph-mon[74456]: pgmap v855: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:10:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v856: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 76 op/s
Jan 26 10:10:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:45 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:46.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:46 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:10:46] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 26 10:10:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:10:46] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 26 10:10:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:46.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:46 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180048e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:46 compute-0 ceph-mon[74456]: pgmap v856: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 76 op/s
Jan 26 10:10:46 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 26 10:10:46 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:46.998604) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:10:46 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 26 10:10:46 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422246998629, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2124, "num_deletes": 251, "total_data_size": 4173662, "memory_usage": 4232064, "flush_reason": "Manual Compaction"}
Jan 26 10:10:46 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422247018690, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4034793, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24763, "largest_seqno": 26886, "table_properties": {"data_size": 4025507, "index_size": 5780, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19611, "raw_average_key_size": 20, "raw_value_size": 4006696, "raw_average_value_size": 4134, "num_data_blocks": 254, "num_entries": 969, "num_filter_entries": 969, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769422037, "oldest_key_time": 1769422037, "file_creation_time": 1769422246, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 20159 microseconds, and 7816 cpu microseconds.
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.018755) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4034793 bytes OK
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.018782) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.021045) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.021066) EVENT_LOG_v1 {"time_micros": 1769422247021059, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.021085) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4165071, prev total WAL file size 4165071, number of live WAL files 2.
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.022792) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3940KB)], [56(12MB)]
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422247022865, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 16667198, "oldest_snapshot_seqno": -1}
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5820 keys, 14553715 bytes, temperature: kUnknown
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422247093021, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 14553715, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14513941, "index_size": 24112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 147901, "raw_average_key_size": 25, "raw_value_size": 14407927, "raw_average_value_size": 2475, "num_data_blocks": 984, "num_entries": 5820, "num_filter_entries": 5820, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769422247, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.093541) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14553715 bytes
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.095325) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 237.1 rd, 207.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 12.0 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6336, records dropped: 516 output_compression: NoCompression
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.095366) EVENT_LOG_v1 {"time_micros": 1769422247095346, "job": 30, "event": "compaction_finished", "compaction_time_micros": 70295, "compaction_time_cpu_micros": 29798, "output_level": 6, "num_output_files": 1, "total_output_size": 14553715, "num_input_records": 6336, "num_output_records": 5820, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422247097033, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422247101813, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.022653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.101953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.101964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.101968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.101971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:10:47 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:10:47.101974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:10:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:10:47.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:10:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v857: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 2.7 KiB/s wr, 4 op/s
Jan 26 10:10:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:47 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:48 compute-0 ceph-mon[74456]: pgmap v857: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 2.7 KiB/s wr, 4 op/s
Jan 26 10:10:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:48.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:48 compute-0 nova_compute[254880]: 2026-01-26 10:10:48.237 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:10:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:48 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:48.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:10:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:10:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:10:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:10:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:10:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:10:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:10:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:10:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:48 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:10:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v858: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 2.7 KiB/s wr, 4 op/s
Jan 26 10:10:49 compute-0 nova_compute[254880]: 2026-01-26 10:10:49.603 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:49 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004900 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:50.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:50 compute-0 ceph-mon[74456]: pgmap v858: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 2.7 KiB/s wr, 4 op/s
Jan 26 10:10:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:10:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:50.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v859: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 457 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 26 10:10:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:51 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:52.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:52 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004920 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:52.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:52 compute-0 ceph-mon[74456]: pgmap v859: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 457 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 26 10:10:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:52 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:53 compute-0 nova_compute[254880]: 2026-01-26 10:10:53.242 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:10:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v860: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 26 10:10:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:53 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:54.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:54 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:54 compute-0 nova_compute[254880]: 2026-01-26 10:10:54.606 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:10:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:54.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:10:54 compute-0 ceph-mon[74456]: pgmap v860: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 26 10:10:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:10:54.694 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:10:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:10:54.695 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:10:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:10:54.695 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:10:54 compute-0 sudo[265157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:10:54 compute-0 sudo[265157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:10:54 compute-0 sudo[265157]: pam_unix(sudo:session): session closed for user root
Jan 26 10:10:54 compute-0 sudo[265182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:10:54 compute-0 sudo[265182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:10:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:54 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180049d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 10:10:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 10:10:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 10:10:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 10:10:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:55 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:55 compute-0 sudo[265182]: pam_unix(sudo:session): session closed for user root
Jan 26 10:10:55 compute-0 sudo[265238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:10:55 compute-0 sudo[265238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:10:55 compute-0 sudo[265238]: pam_unix(sudo:session): session closed for user root
Jan 26 10:10:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v861: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 411 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 26 10:10:55 compute-0 sudo[265267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- inventory --format=json-pretty --filter-for-batch
Jan 26 10:10:55 compute-0 sudo[265267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:10:55 compute-0 podman[265262]: 2026-01-26 10:10:55.679187113 +0000 UTC m=+0.127996542 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 10:10:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:55 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:56 compute-0 podman[265358]: 2026-01-26 10:10:56.083079153 +0000 UTC m=+0.056262887 container create 098d7e420a11b5016ef3f453331d721f1f26ef48440bcef4633289bb5d9366b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_faraday, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 10:10:56 compute-0 systemd[1]: Started libpod-conmon-098d7e420a11b5016ef3f453331d721f1f26ef48440bcef4633289bb5d9366b7.scope.
Jan 26 10:10:56 compute-0 podman[265358]: 2026-01-26 10:10:56.052100074 +0000 UTC m=+0.025283858 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:10:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:10:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:56.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:56 compute-0 podman[265358]: 2026-01-26 10:10:56.186827993 +0000 UTC m=+0.160011767 container init 098d7e420a11b5016ef3f453331d721f1f26ef48440bcef4633289bb5d9366b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:10:56 compute-0 podman[265358]: 2026-01-26 10:10:56.194860046 +0000 UTC m=+0.168043750 container start 098d7e420a11b5016ef3f453331d721f1f26ef48440bcef4633289bb5d9366b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:10:56 compute-0 podman[265358]: 2026-01-26 10:10:56.199029046 +0000 UTC m=+0.172212800 container attach 098d7e420a11b5016ef3f453331d721f1f26ef48440bcef4633289bb5d9366b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_faraday, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:10:56 compute-0 vibrant_faraday[265374]: 167 167
Jan 26 10:10:56 compute-0 systemd[1]: libpod-098d7e420a11b5016ef3f453331d721f1f26ef48440bcef4633289bb5d9366b7.scope: Deactivated successfully.
Jan 26 10:10:56 compute-0 conmon[265374]: conmon 098d7e420a11b5016ef3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-098d7e420a11b5016ef3f453331d721f1f26ef48440bcef4633289bb5d9366b7.scope/container/memory.events
Jan 26 10:10:56 compute-0 podman[265358]: 2026-01-26 10:10:56.208731572 +0000 UTC m=+0.181915306 container died 098d7e420a11b5016ef3f453331d721f1f26ef48440bcef4633289bb5d9366b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 26 10:10:56 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:56 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:56 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:56 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:56 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2141004236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:10:56 compute-0 ceph-mon[74456]: pgmap v861: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 411 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 26 10:10:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-30225df477c7adb789ccf00487732a65c79be7bcc2eeed4442d51a60a8d5902e-merged.mount: Deactivated successfully.
Jan 26 10:10:56 compute-0 podman[265358]: 2026-01-26 10:10:56.271789628 +0000 UTC m=+0.244973362 container remove 098d7e420a11b5016ef3f453331d721f1f26ef48440bcef4633289bb5d9366b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_faraday, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:10:56 compute-0 systemd[1]: libpod-conmon-098d7e420a11b5016ef3f453331d721f1f26ef48440bcef4633289bb5d9366b7.scope: Deactivated successfully.
Jan 26 10:10:56 compute-0 podman[265398]: 2026-01-26 10:10:56.479423033 +0000 UTC m=+0.056612536 container create e0a1c7f06d83b83a2b9393a2eda6a43fbe92f0851285dfac1010d1ef93b571b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_chaum, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 26 10:10:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:56 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:56 compute-0 systemd[1]: Started libpod-conmon-e0a1c7f06d83b83a2b9393a2eda6a43fbe92f0851285dfac1010d1ef93b571b8.scope.
Jan 26 10:10:56 compute-0 podman[265398]: 2026-01-26 10:10:56.449910573 +0000 UTC m=+0.027100156 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:10:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1911840d3cfdbc2d26c980b9089b89fbb528b9685d0daf30c18b9580f48deb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1911840d3cfdbc2d26c980b9089b89fbb528b9685d0daf30c18b9580f48deb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1911840d3cfdbc2d26c980b9089b89fbb528b9685d0daf30c18b9580f48deb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:10:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1911840d3cfdbc2d26c980b9089b89fbb528b9685d0daf30c18b9580f48deb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:10:56 compute-0 podman[265398]: 2026-01-26 10:10:56.563982296 +0000 UTC m=+0.141171819 container init e0a1c7f06d83b83a2b9393a2eda6a43fbe92f0851285dfac1010d1ef93b571b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_chaum, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 10:10:56 compute-0 podman[265398]: 2026-01-26 10:10:56.577051612 +0000 UTC m=+0.154241125 container start e0a1c7f06d83b83a2b9393a2eda6a43fbe92f0851285dfac1010d1ef93b571b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:10:56 compute-0 podman[265398]: 2026-01-26 10:10:56.581987842 +0000 UTC m=+0.159177385 container attach e0a1c7f06d83b83a2b9393a2eda6a43fbe92f0851285dfac1010d1ef93b571b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_chaum, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 10:10:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:10:56] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 26 10:10:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:10:56] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 26 10:10:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:10:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:56.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:10:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:56 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:10:57.139Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:10:57 compute-0 cool_chaum[265416]: [
Jan 26 10:10:57 compute-0 cool_chaum[265416]:     {
Jan 26 10:10:57 compute-0 cool_chaum[265416]:         "available": false,
Jan 26 10:10:57 compute-0 cool_chaum[265416]:         "being_replaced": false,
Jan 26 10:10:57 compute-0 cool_chaum[265416]:         "ceph_device_lvm": false,
Jan 26 10:10:57 compute-0 cool_chaum[265416]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:         "lsm_data": {},
Jan 26 10:10:57 compute-0 cool_chaum[265416]:         "lvs": [],
Jan 26 10:10:57 compute-0 cool_chaum[265416]:         "path": "/dev/sr0",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:         "rejected_reasons": [
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "Insufficient space (<5GB)",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "Has a FileSystem"
Jan 26 10:10:57 compute-0 cool_chaum[265416]:         ],
Jan 26 10:10:57 compute-0 cool_chaum[265416]:         "sys_api": {
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "actuators": null,
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "device_nodes": [
Jan 26 10:10:57 compute-0 cool_chaum[265416]:                 "sr0"
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             ],
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "devname": "sr0",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "human_readable_size": "482.00 KB",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "id_bus": "ata",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "model": "QEMU DVD-ROM",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "nr_requests": "2",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "parent": "/dev/sr0",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "partitions": {},
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "path": "/dev/sr0",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "removable": "1",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "rev": "2.5+",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "ro": "0",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "rotational": "1",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "sas_address": "",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "sas_device_handle": "",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "scheduler_mode": "mq-deadline",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "sectors": 0,
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "sectorsize": "2048",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "size": 493568.0,
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "support_discard": "2048",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "type": "disk",
Jan 26 10:10:57 compute-0 cool_chaum[265416]:             "vendor": "QEMU"
Jan 26 10:10:57 compute-0 cool_chaum[265416]:         }
Jan 26 10:10:57 compute-0 cool_chaum[265416]:     }
Jan 26 10:10:57 compute-0 cool_chaum[265416]: ]
Jan 26 10:10:57 compute-0 systemd[1]: libpod-e0a1c7f06d83b83a2b9393a2eda6a43fbe92f0851285dfac1010d1ef93b571b8.scope: Deactivated successfully.
Jan 26 10:10:57 compute-0 podman[265398]: 2026-01-26 10:10:57.405839876 +0000 UTC m=+0.983029379 container died e0a1c7f06d83b83a2b9393a2eda6a43fbe92f0851285dfac1010d1ef93b571b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_chaum, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 10:10:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1911840d3cfdbc2d26c980b9089b89fbb528b9685d0daf30c18b9580f48deb3-merged.mount: Deactivated successfully.
Jan 26 10:10:57 compute-0 podman[265398]: 2026-01-26 10:10:57.448726828 +0000 UTC m=+1.025916331 container remove e0a1c7f06d83b83a2b9393a2eda6a43fbe92f0851285dfac1010d1ef93b571b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_chaum, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:10:57 compute-0 systemd[1]: libpod-conmon-e0a1c7f06d83b83a2b9393a2eda6a43fbe92f0851285dfac1010d1ef93b571b8.scope: Deactivated successfully.
Jan 26 10:10:57 compute-0 sudo[265267]: pam_unix(sudo:session): session closed for user root
Jan 26 10:10:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:10:57 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:10:57 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v862: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 407 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Jan 26 10:10:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:57 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb180049f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:10:58.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:58 compute-0 nova_compute[254880]: 2026-01-26 10:10:58.245 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:10:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:58 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:58 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:58 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:58 compute-0 ceph-mon[74456]: pgmap v862: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 407 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Jan 26 10:10:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:10:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:10:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:10:58.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:10:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 10:10:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 10:10:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:10:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:10:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:10:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:10:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:10:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:10:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:10:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:10:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:10:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:10:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:10:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:10:58 compute-0 sudo[266795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:10:58 compute-0 sudo[266795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:10:58 compute-0 sudo[266795]: pam_unix(sudo:session): session closed for user root
Jan 26 10:10:58 compute-0 sudo[266820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:10:58 compute-0 sudo[266820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:10:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:58 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:10:59 compute-0 podman[266887]: 2026-01-26 10:10:59.381994949 +0000 UTC m=+0.047016003 container create 226c858a076befe77a097369fea18f3ae61fd5da74b3fb7106a73fa001c272f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mcclintock, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:10:59 compute-0 systemd[1]: Started libpod-conmon-226c858a076befe77a097369fea18f3ae61fd5da74b3fb7106a73fa001c272f8.scope.
Jan 26 10:10:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:10:59 compute-0 podman[266887]: 2026-01-26 10:10:59.363711896 +0000 UTC m=+0.028732990 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:10:59 compute-0 podman[266887]: 2026-01-26 10:10:59.48913982 +0000 UTC m=+0.154160904 container init 226c858a076befe77a097369fea18f3ae61fd5da74b3fb7106a73fa001c272f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mcclintock, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:10:59 compute-0 podman[266887]: 2026-01-26 10:10:59.495883677 +0000 UTC m=+0.160904731 container start 226c858a076befe77a097369fea18f3ae61fd5da74b3fb7106a73fa001c272f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 10:10:59 compute-0 podman[266887]: 2026-01-26 10:10:59.500172381 +0000 UTC m=+0.165193485 container attach 226c858a076befe77a097369fea18f3ae61fd5da74b3fb7106a73fa001c272f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 10:10:59 compute-0 clever_mcclintock[266904]: 167 167
Jan 26 10:10:59 compute-0 systemd[1]: libpod-226c858a076befe77a097369fea18f3ae61fd5da74b3fb7106a73fa001c272f8.scope: Deactivated successfully.
Jan 26 10:10:59 compute-0 conmon[266904]: conmon 226c858a076befe77a09 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-226c858a076befe77a097369fea18f3ae61fd5da74b3fb7106a73fa001c272f8.scope/container/memory.events
Jan 26 10:10:59 compute-0 podman[266887]: 2026-01-26 10:10:59.509359593 +0000 UTC m=+0.174380647 container died 226c858a076befe77a097369fea18f3ae61fd5da74b3fb7106a73fa001c272f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mcclintock, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 10:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca38b5a97d94b5ae78131112eaa21a7bb8bd84dee111289fec7392a67a900025-merged.mount: Deactivated successfully.
Jan 26 10:10:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v863: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 407 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Jan 26 10:10:59 compute-0 nova_compute[254880]: 2026-01-26 10:10:59.608 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:10:59 compute-0 podman[266887]: 2026-01-26 10:10:59.636184454 +0000 UTC m=+0.301205518 container remove 226c858a076befe77a097369fea18f3ae61fd5da74b3fb7106a73fa001c272f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:10:59 compute-0 systemd[1]: libpod-conmon-226c858a076befe77a097369fea18f3ae61fd5da74b3fb7106a73fa001c272f8.scope: Deactivated successfully.
Jan 26 10:10:59 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:59 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:59 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:10:59 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:10:59 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:59 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:10:59 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:10:59 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:10:59 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:10:59 compute-0 podman[266928]: 2026-01-26 10:10:59.805390624 +0000 UTC m=+0.043584163 container create 411c9a959beb5f12ddeac81747c84238cfe8557337f00ac41266d6619ea6e19a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:10:59 compute-0 systemd[1]: Started libpod-conmon-411c9a959beb5f12ddeac81747c84238cfe8557337f00ac41266d6619ea6e19a.scope.
Jan 26 10:10:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8786e5e52b7a8fa7a35df4e84bd16cd222368b75851acc6950766e7f4a015503/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8786e5e52b7a8fa7a35df4e84bd16cd222368b75851acc6950766e7f4a015503/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8786e5e52b7a8fa7a35df4e84bd16cd222368b75851acc6950766e7f4a015503/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:10:59 compute-0 podman[266928]: 2026-01-26 10:10:59.786577926 +0000 UTC m=+0.024771485 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8786e5e52b7a8fa7a35df4e84bd16cd222368b75851acc6950766e7f4a015503/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8786e5e52b7a8fa7a35df4e84bd16cd222368b75851acc6950766e7f4a015503/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:10:59 compute-0 podman[266928]: 2026-01-26 10:10:59.893697067 +0000 UTC m=+0.131890626 container init 411c9a959beb5f12ddeac81747c84238cfe8557337f00ac41266d6619ea6e19a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_jemison, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 26 10:10:59 compute-0 podman[266928]: 2026-01-26 10:10:59.906613858 +0000 UTC m=+0.144807397 container start 411c9a959beb5f12ddeac81747c84238cfe8557337f00ac41266d6619ea6e19a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:10:59 compute-0 podman[266928]: 2026-01-26 10:10:59.910264283 +0000 UTC m=+0.148457822 container attach 411c9a959beb5f12ddeac81747c84238cfe8557337f00ac41266d6619ea6e19a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:10:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:10:59 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:11:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:00.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:11:00 compute-0 quizzical_jemison[266944]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:11:00 compute-0 quizzical_jemison[266944]: --> All data devices are unavailable
Jan 26 10:11:00 compute-0 systemd[1]: libpod-411c9a959beb5f12ddeac81747c84238cfe8557337f00ac41266d6619ea6e19a.scope: Deactivated successfully.
Jan 26 10:11:00 compute-0 podman[266928]: 2026-01-26 10:11:00.291785672 +0000 UTC m=+0.529979251 container died 411c9a959beb5f12ddeac81747c84238cfe8557337f00ac41266d6619ea6e19a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_jemison, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:11:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8786e5e52b7a8fa7a35df4e84bd16cd222368b75851acc6950766e7f4a015503-merged.mount: Deactivated successfully.
Jan 26 10:11:00 compute-0 podman[266928]: 2026-01-26 10:11:00.341908417 +0000 UTC m=+0.580101966 container remove 411c9a959beb5f12ddeac81747c84238cfe8557337f00ac41266d6619ea6e19a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:11:00 compute-0 systemd[1]: libpod-conmon-411c9a959beb5f12ddeac81747c84238cfe8557337f00ac41266d6619ea6e19a.scope: Deactivated successfully.
Jan 26 10:11:00 compute-0 sudo[266820]: pam_unix(sudo:session): session closed for user root
Jan 26 10:11:00 compute-0 sudo[266974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:11:00 compute-0 sudo[266974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:11:00 compute-0 sudo[266974]: pam_unix(sudo:session): session closed for user root
Jan 26 10:11:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:11:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004a10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:00 compute-0 sudo[267000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:11:00 compute-0 sudo[267000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:11:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:00.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:00 compute-0 ceph-mon[74456]: pgmap v863: 353 pgs: 353 active+clean; 121 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 407 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Jan 26 10:11:00 compute-0 podman[267068]: 2026-01-26 10:11:00.925654907 +0000 UTC m=+0.050958608 container create 6ddaf5833557d49245457eb6b08eece190f37988431a9ab22101ebc75c15159b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_solomon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 10:11:00 compute-0 systemd[1]: Started libpod-conmon-6ddaf5833557d49245457eb6b08eece190f37988431a9ab22101ebc75c15159b.scope.
Jan 26 10:11:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:00 compute-0 podman[267068]: 2026-01-26 10:11:00.899552237 +0000 UTC m=+0.024855948 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:11:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:11:01 compute-0 podman[267068]: 2026-01-26 10:11:01.030787194 +0000 UTC m=+0.156090875 container init 6ddaf5833557d49245457eb6b08eece190f37988431a9ab22101ebc75c15159b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_solomon, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:11:01 compute-0 podman[267068]: 2026-01-26 10:11:01.04010772 +0000 UTC m=+0.165411381 container start 6ddaf5833557d49245457eb6b08eece190f37988431a9ab22101ebc75c15159b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 10:11:01 compute-0 podman[267068]: 2026-01-26 10:11:01.043466949 +0000 UTC m=+0.168770660 container attach 6ddaf5833557d49245457eb6b08eece190f37988431a9ab22101ebc75c15159b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_solomon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:11:01 compute-0 quizzical_solomon[267086]: 167 167
Jan 26 10:11:01 compute-0 systemd[1]: libpod-6ddaf5833557d49245457eb6b08eece190f37988431a9ab22101ebc75c15159b.scope: Deactivated successfully.
Jan 26 10:11:01 compute-0 podman[267068]: 2026-01-26 10:11:01.045550924 +0000 UTC m=+0.170854625 container died 6ddaf5833557d49245457eb6b08eece190f37988431a9ab22101ebc75c15159b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a5416930a3244083d43a061d6f2b57cb7140a384d5c0e6cedada42b9a60ac05-merged.mount: Deactivated successfully.
Jan 26 10:11:01 compute-0 podman[267068]: 2026-01-26 10:11:01.086486585 +0000 UTC m=+0.211790246 container remove 6ddaf5833557d49245457eb6b08eece190f37988431a9ab22101ebc75c15159b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:11:01 compute-0 systemd[1]: libpod-conmon-6ddaf5833557d49245457eb6b08eece190f37988431a9ab22101ebc75c15159b.scope: Deactivated successfully.
Jan 26 10:11:01 compute-0 podman[267110]: 2026-01-26 10:11:01.289106488 +0000 UTC m=+0.047165537 container create 947da28f26d6f71787118f225491b9377d89327e67440c810db73f902482a9a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_morse, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:11:01 compute-0 systemd[1]: Started libpod-conmon-947da28f26d6f71787118f225491b9377d89327e67440c810db73f902482a9a8.scope.
Jan 26 10:11:01 compute-0 podman[267110]: 2026-01-26 10:11:01.266439619 +0000 UTC m=+0.024498688 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:11:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8810172f75be7ac57e33d4f371afef224b61b3423153829902a7b1ff14750f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8810172f75be7ac57e33d4f371afef224b61b3423153829902a7b1ff14750f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8810172f75be7ac57e33d4f371afef224b61b3423153829902a7b1ff14750f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8810172f75be7ac57e33d4f371afef224b61b3423153829902a7b1ff14750f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:11:01 compute-0 podman[267110]: 2026-01-26 10:11:01.384565929 +0000 UTC m=+0.142625028 container init 947da28f26d6f71787118f225491b9377d89327e67440c810db73f902482a9a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_morse, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:11:01 compute-0 podman[267110]: 2026-01-26 10:11:01.393139466 +0000 UTC m=+0.151198526 container start 947da28f26d6f71787118f225491b9377d89327e67440c810db73f902482a9a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 10:11:01 compute-0 podman[267110]: 2026-01-26 10:11:01.396857534 +0000 UTC m=+0.154916633 container attach 947da28f26d6f71787118f225491b9377d89327e67440c810db73f902482a9a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_morse, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:11:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v864: 353 pgs: 353 active+clean; 41 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 419 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 26 10:11:01 compute-0 relaxed_morse[267127]: {
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:     "0": [
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:         {
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:             "devices": [
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "/dev/loop3"
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:             ],
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:             "lv_name": "ceph_lv0",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:             "lv_size": "21470642176",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:             "name": "ceph_lv0",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:             "tags": {
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "ceph.cluster_name": "ceph",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "ceph.crush_device_class": "",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "ceph.encrypted": "0",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "ceph.osd_id": "0",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "ceph.type": "block",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "ceph.vdo": "0",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:                 "ceph.with_tpm": "0"
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:             },
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:             "type": "block",
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:             "vg_name": "ceph_vg0"
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:         }
Jan 26 10:11:01 compute-0 relaxed_morse[267127]:     ]
Jan 26 10:11:01 compute-0 relaxed_morse[267127]: }
Jan 26 10:11:01 compute-0 systemd[1]: libpod-947da28f26d6f71787118f225491b9377d89327e67440c810db73f902482a9a8.scope: Deactivated successfully.
Jan 26 10:11:01 compute-0 podman[267110]: 2026-01-26 10:11:01.675098975 +0000 UTC m=+0.433158014 container died 947da28f26d6f71787118f225491b9377d89327e67440c810db73f902482a9a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_morse, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 10:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e8810172f75be7ac57e33d4f371afef224b61b3423153829902a7b1ff14750f-merged.mount: Deactivated successfully.
Jan 26 10:11:01 compute-0 podman[267110]: 2026-01-26 10:11:01.719364383 +0000 UTC m=+0.477423452 container remove 947da28f26d6f71787118f225491b9377d89327e67440c810db73f902482a9a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_morse, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 10:11:01 compute-0 systemd[1]: libpod-conmon-947da28f26d6f71787118f225491b9377d89327e67440c810db73f902482a9a8.scope: Deactivated successfully.
Jan 26 10:11:01 compute-0 sudo[267000]: pam_unix(sudo:session): session closed for user root
Jan 26 10:11:01 compute-0 sudo[267147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:11:01 compute-0 sudo[267147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:11:01 compute-0 sudo[267147]: pam_unix(sudo:session): session closed for user root
Jan 26 10:11:01 compute-0 sudo[267172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:11:01 compute-0 sudo[267172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:11:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:01 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:11:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:02.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:11:02 compute-0 podman[267238]: 2026-01-26 10:11:02.320264188 +0000 UTC m=+0.039786563 container create 3bd9b7448866426ffb843475b8ed42e3f31360f4a7b66290205af52fdb2845c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lehmann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:11:02 compute-0 systemd[1]: Started libpod-conmon-3bd9b7448866426ffb843475b8ed42e3f31360f4a7b66290205af52fdb2845c1.scope.
Jan 26 10:11:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:11:02 compute-0 podman[267238]: 2026-01-26 10:11:02.300641119 +0000 UTC m=+0.020163504 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:11:02 compute-0 podman[267238]: 2026-01-26 10:11:02.398567306 +0000 UTC m=+0.118089701 container init 3bd9b7448866426ffb843475b8ed42e3f31360f4a7b66290205af52fdb2845c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:11:02 compute-0 podman[267238]: 2026-01-26 10:11:02.408838037 +0000 UTC m=+0.128360402 container start 3bd9b7448866426ffb843475b8ed42e3f31360f4a7b66290205af52fdb2845c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lehmann, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 26 10:11:02 compute-0 podman[267238]: 2026-01-26 10:11:02.413304465 +0000 UTC m=+0.132826830 container attach 3bd9b7448866426ffb843475b8ed42e3f31360f4a7b66290205af52fdb2845c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lehmann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:11:02 compute-0 relaxed_lehmann[267254]: 167 167
Jan 26 10:11:02 compute-0 systemd[1]: libpod-3bd9b7448866426ffb843475b8ed42e3f31360f4a7b66290205af52fdb2845c1.scope: Deactivated successfully.
Jan 26 10:11:02 compute-0 podman[267238]: 2026-01-26 10:11:02.419024046 +0000 UTC m=+0.138546511 container died 3bd9b7448866426ffb843475b8ed42e3f31360f4a7b66290205af52fdb2845c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lehmann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 26 10:11:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-75c52bd5fa398dffdd3f4525c7aabdba4dac419f459ad4e7ace3f485c4b5120b-merged.mount: Deactivated successfully.
Jan 26 10:11:02 compute-0 podman[267238]: 2026-01-26 10:11:02.462279889 +0000 UTC m=+0.181802254 container remove 3bd9b7448866426ffb843475b8ed42e3f31360f4a7b66290205af52fdb2845c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:11:02 compute-0 systemd[1]: libpod-conmon-3bd9b7448866426ffb843475b8ed42e3f31360f4a7b66290205af52fdb2845c1.scope: Deactivated successfully.
Jan 26 10:11:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:02 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:02 compute-0 podman[267282]: 2026-01-26 10:11:02.624149745 +0000 UTC m=+0.044095276 container create 5d07580976042a082698682a4ddfca5ec7e66c441ba7ec9d0a154cb501a0f9ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mahavira, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 26 10:11:02 compute-0 systemd[1]: Started libpod-conmon-5d07580976042a082698682a4ddfca5ec7e66c441ba7ec9d0a154cb501a0f9ba.scope.
Jan 26 10:11:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:02.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:11:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556630199cc46295d0d30b70dfa471a5d936be1058899673439736504df0b706/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:11:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556630199cc46295d0d30b70dfa471a5d936be1058899673439736504df0b706/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:11:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556630199cc46295d0d30b70dfa471a5d936be1058899673439736504df0b706/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:11:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556630199cc46295d0d30b70dfa471a5d936be1058899673439736504df0b706/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:11:02 compute-0 podman[267282]: 2026-01-26 10:11:02.604580268 +0000 UTC m=+0.024525809 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:11:02 compute-0 podman[267282]: 2026-01-26 10:11:02.707882507 +0000 UTC m=+0.127828068 container init 5d07580976042a082698682a4ddfca5ec7e66c441ba7ec9d0a154cb501a0f9ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 10:11:02 compute-0 podman[267282]: 2026-01-26 10:11:02.716802863 +0000 UTC m=+0.136748384 container start 5d07580976042a082698682a4ddfca5ec7e66c441ba7ec9d0a154cb501a0f9ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:11:02 compute-0 podman[267282]: 2026-01-26 10:11:02.720794648 +0000 UTC m=+0.140740179 container attach 5d07580976042a082698682a4ddfca5ec7e66c441ba7ec9d0a154cb501a0f9ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mahavira, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:11:02 compute-0 ceph-mon[74456]: pgmap v864: 353 pgs: 353 active+clean; 41 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 419 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 26 10:11:02 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3970104987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:11:02 compute-0 sshd-session[267275]: Invalid user postgres from 157.245.76.178 port 39648
Jan 26 10:11:02 compute-0 sshd-session[267275]: Connection closed by invalid user postgres 157.245.76.178 port 39648 [preauth]
Jan 26 10:11:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:02 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:03 compute-0 nova_compute[254880]: 2026-01-26 10:11:03.248 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:03 compute-0 lvm[267373]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:11:03 compute-0 lvm[267373]: VG ceph_vg0 finished
Jan 26 10:11:03 compute-0 nostalgic_mahavira[267299]: {}
Jan 26 10:11:03 compute-0 systemd[1]: libpod-5d07580976042a082698682a4ddfca5ec7e66c441ba7ec9d0a154cb501a0f9ba.scope: Deactivated successfully.
Jan 26 10:11:03 compute-0 systemd[1]: libpod-5d07580976042a082698682a4ddfca5ec7e66c441ba7ec9d0a154cb501a0f9ba.scope: Consumed 1.154s CPU time.
Jan 26 10:11:03 compute-0 podman[267282]: 2026-01-26 10:11:03.436293529 +0000 UTC m=+0.856239060 container died 5d07580976042a082698682a4ddfca5ec7e66c441ba7ec9d0a154cb501a0f9ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Jan 26 10:11:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-556630199cc46295d0d30b70dfa471a5d936be1058899673439736504df0b706-merged.mount: Deactivated successfully.
Jan 26 10:11:03 compute-0 podman[267282]: 2026-01-26 10:11:03.480125847 +0000 UTC m=+0.900071378 container remove 5d07580976042a082698682a4ddfca5ec7e66c441ba7ec9d0a154cb501a0f9ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:11:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:11:03 compute-0 systemd[1]: libpod-conmon-5d07580976042a082698682a4ddfca5ec7e66c441ba7ec9d0a154cb501a0f9ba.scope: Deactivated successfully.
Jan 26 10:11:03 compute-0 sudo[267172]: pam_unix(sudo:session): session closed for user root
Jan 26 10:11:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:11:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:11:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:11:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:11:03 compute-0 sudo[267390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:11:03 compute-0 sudo[267390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:11:03 compute-0 sudo[267390]: pam_unix(sudo:session): session closed for user root
Jan 26 10:11:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v865: 353 pgs: 353 active+clean; 41 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 14 KiB/s wr, 46 op/s
Jan 26 10:11:03 compute-0 sudo[267413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:11:03 compute-0 sudo[267413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:11:03 compute-0 sudo[267413]: pam_unix(sudo:session): session closed for user root
Jan 26 10:11:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:11:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:11:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:03 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:04.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:04 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:11:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:11:04 compute-0 ceph-mon[74456]: pgmap v865: 353 pgs: 353 active+clean; 41 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 14 KiB/s wr, 46 op/s
Jan 26 10:11:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:11:04 compute-0 nova_compute[254880]: 2026-01-26 10:11:04.610 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:04.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:04 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v866: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 57 op/s
Jan 26 10:11:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:05 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:06.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:06 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:11:06] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Jan 26 10:11:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:11:06] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Jan 26 10:11:06 compute-0 ceph-mon[74456]: pgmap v866: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 57 op/s
Jan 26 10:11:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:06.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:06 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:11:07.140Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:11:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:11:07.140Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:11:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v867: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:11:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:07 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:08.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:08 compute-0 nova_compute[254880]: 2026-01-26 10:11:08.295 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:11:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:08 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:08 compute-0 ceph-mon[74456]: pgmap v867: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:11:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 10:11:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:08.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 10:11:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:08 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v868: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:11:09 compute-0 nova_compute[254880]: 2026-01-26 10:11:09.644 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:09 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 10:11:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:10.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 10:11:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:11:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:11:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:10.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:11:10 compute-0 ceph-mon[74456]: pgmap v868: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:11:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v869: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:11:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:11 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:12.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:12 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:12.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:12 compute-0 ceph-mon[74456]: pgmap v869: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:11:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:12 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:13 compute-0 nova_compute[254880]: 2026-01-26 10:11:13.300 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:11:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v870: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 597 B/s wr, 10 op/s
Jan 26 10:11:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:13 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:14 compute-0 podman[267451]: 2026-01-26 10:11:14.151910482 +0000 UTC m=+0.086554027 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 10:11:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:11:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:14.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:11:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:14 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:14 compute-0 nova_compute[254880]: 2026-01-26 10:11:14.646 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:14.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:14 compute-0 ceph-mon[74456]: pgmap v870: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 597 B/s wr, 10 op/s
Jan 26 10:11:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:14 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v871: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s rd, 597 B/s wr, 11 op/s
Jan 26 10:11:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:15 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 10:11:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:16.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 10:11:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:16 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:11:16] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Jan 26 10:11:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:11:16] "GET /metrics HTTP/1.1" 200 48467 "" "Prometheus/2.51.0"
Jan 26 10:11:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:16.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:16 compute-0 ceph-mon[74456]: pgmap v871: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s rd, 597 B/s wr, 11 op/s
Jan 26 10:11:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:16 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:11:17.141Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:11:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v872: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:11:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:17 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:11:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:18.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:11:18 compute-0 nova_compute[254880]: 2026-01-26 10:11:18.303 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:11:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:18 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:11:18
Jan 26 10:11:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:11:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:11:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['vms', '.nfs', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'backups', 'volumes', 'default.rgw.meta', '.mgr']
Jan 26 10:11:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:11:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:18.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:11:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:11:18 compute-0 ceph-mon[74456]: pgmap v872: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:11:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:11:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:11:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:11:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:11:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:11:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:11:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:11:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:18 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:11:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v873: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:11:19 compute-0 nova_compute[254880]: 2026-01-26 10:11:19.647 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:19 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004af0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:20 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:20.056 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:11:20 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:20.056 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 10:11:20 compute-0 nova_compute[254880]: 2026-01-26 10:11:20.057 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:20.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:11:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:20.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:20 compute-0 ceph-mon[74456]: pgmap v873: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:11:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v874: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Jan 26 10:11:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:21 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:11:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:22.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:11:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:22 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:22.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:22 compute-0 ceph-mon[74456]: pgmap v874: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Jan 26 10:11:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:22 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10001bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:23 compute-0 nova_compute[254880]: 2026-01-26 10:11:23.309 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:11:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v875: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:11:23 compute-0 sudo[267480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:11:23 compute-0 sudo[267480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:11:23 compute-0 sudo[267480]: pam_unix(sudo:session): session closed for user root
Jan 26 10:11:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:23 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:24.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:24 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:24 compute-0 nova_compute[254880]: 2026-01-26 10:11:24.649 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:24.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:24 compute-0 ceph-mon[74456]: pgmap v875: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:11:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:24 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:25 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:25.060 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:11:25 compute-0 nova_compute[254880]: 2026-01-26 10:11:25.229 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "26741812-4ddf-457d-b571-7e2005b5133d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:11:25 compute-0 nova_compute[254880]: 2026-01-26 10:11:25.230 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:11:25 compute-0 nova_compute[254880]: 2026-01-26 10:11:25.248 254884 DEBUG nova.compute.manager [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 10:11:25 compute-0 nova_compute[254880]: 2026-01-26 10:11:25.318 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:11:25 compute-0 nova_compute[254880]: 2026-01-26 10:11:25.319 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:11:25 compute-0 nova_compute[254880]: 2026-01-26 10:11:25.328 254884 DEBUG nova.virt.hardware [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 10:11:25 compute-0 nova_compute[254880]: 2026-01-26 10:11:25.328 254884 INFO nova.compute.claims [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Claim successful on node compute-0.ctlplane.example.com
Jan 26 10:11:25 compute-0 nova_compute[254880]: 2026-01-26 10:11:25.515 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:11:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v876: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:11:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:11:25 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2018372751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:11:25 compute-0 nova_compute[254880]: 2026-01-26 10:11:25.938 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:11:25 compute-0 nova_compute[254880]: 2026-01-26 10:11:25.943 254884 DEBUG nova.compute.provider_tree [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:11:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:25 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004b30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:26 compute-0 podman[267530]: 2026-01-26 10:11:26.157332804 +0000 UTC m=+0.090004036 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 10:11:26 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2018372751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:11:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:26.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:26 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.587 254884 DEBUG nova.scheduler.client.report [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.615 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.616 254884 DEBUG nova.compute.manager [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 10:11:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:11:26] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Jan 26 10:11:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:11:26] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.679 254884 DEBUG nova.compute.manager [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.680 254884 DEBUG nova.network.neutron [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.702 254884 INFO nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 10:11:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:26.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.720 254884 DEBUG nova.compute.manager [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.853 254884 DEBUG nova.compute.manager [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.855 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.856 254884 INFO nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Creating image(s)
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.899 254884 DEBUG nova.storage.rbd_utils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 26741812-4ddf-457d-b571-7e2005b5133d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.924 254884 DEBUG nova.storage.rbd_utils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 26741812-4ddf-457d-b571-7e2005b5133d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.951 254884 DEBUG nova.storage.rbd_utils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 26741812-4ddf-457d-b571-7e2005b5133d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.955 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:11:26 compute-0 nova_compute[254880]: 2026-01-26 10:11:26.973 254884 DEBUG nova.policy [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1208d3e25b940ea93fe76884c7a53db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 10:11:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:26 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.011 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.013 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "d81880e926e175d0cc7241caa7cc18231a8a289c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.014 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "d81880e926e175d0cc7241caa7cc18231a8a289c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.014 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "d81880e926e175d0cc7241caa7cc18231a8a289c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.050 254884 DEBUG nova.storage.rbd_utils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 26741812-4ddf-457d-b571-7e2005b5133d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.054 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c 26741812-4ddf-457d-b571-7e2005b5133d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:11:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:11:27.142Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:11:27 compute-0 ceph-mon[74456]: pgmap v876: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.347 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c 26741812-4ddf-457d-b571-7e2005b5133d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.425 254884 DEBUG nova.storage.rbd_utils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] resizing rbd image 26741812-4ddf-457d-b571-7e2005b5133d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.527 254884 DEBUG nova.objects.instance [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'migration_context' on Instance uuid 26741812-4ddf-457d-b571-7e2005b5133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.541 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.541 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Ensure instance console log exists: /var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.542 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.542 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:11:27 compute-0 nova_compute[254880]: 2026-01-26 10:11:27.543 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:11:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v877: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:11:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:27 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:28 compute-0 ceph-mon[74456]: pgmap v877: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:11:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:11:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:28.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:11:28 compute-0 nova_compute[254880]: 2026-01-26 10:11:28.357 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:11:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:28 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004b50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 10:11:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:28.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 10:11:28 compute-0 nova_compute[254880]: 2026-01-26 10:11:28.931 254884 DEBUG nova.network.neutron [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Successfully created port: 92a5f80f-60e2-449d-9da8-ebaa31f1476c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 10:11:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:28 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v878: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:11:29 compute-0 nova_compute[254880]: 2026-01-26 10:11:29.652 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:29 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:30.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:11:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:30 compute-0 ceph-mon[74456]: pgmap v878: 353 pgs: 353 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 26 10:11:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:30.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:30 compute-0 nova_compute[254880]: 2026-01-26 10:11:30.940 254884 DEBUG nova.network.neutron [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Successfully updated port: 92a5f80f-60e2-449d-9da8-ebaa31f1476c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 10:11:30 compute-0 nova_compute[254880]: 2026-01-26 10:11:30.964 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:11:30 compute-0 nova_compute[254880]: 2026-01-26 10:11:30.964 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquired lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:11:30 compute-0 nova_compute[254880]: 2026-01-26 10:11:30.965 254884 DEBUG nova.network.neutron [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 10:11:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004b70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:31 compute-0 nova_compute[254880]: 2026-01-26 10:11:31.097 254884 DEBUG nova.compute.manager [req-2febdba5-251e-4162-a108-424c0ef3532c req-c0eb5b4f-9d84-4a3f-afa6-27069387ed6b b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-changed-92a5f80f-60e2-449d-9da8-ebaa31f1476c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:11:31 compute-0 nova_compute[254880]: 2026-01-26 10:11:31.098 254884 DEBUG nova.compute.manager [req-2febdba5-251e-4162-a108-424c0ef3532c req-c0eb5b4f-9d84-4a3f-afa6-27069387ed6b b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Refreshing instance network info cache due to event network-changed-92a5f80f-60e2-449d-9da8-ebaa31f1476c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:11:31 compute-0 nova_compute[254880]: 2026-01-26 10:11:31.099 254884 DEBUG oslo_concurrency.lockutils [req-2febdba5-251e-4162-a108-424c0ef3532c req-c0eb5b4f-9d84-4a3f-afa6-27069387ed6b b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:11:31 compute-0 nova_compute[254880]: 2026-01-26 10:11:31.145 254884 DEBUG nova.network.neutron [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 10:11:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v879: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 10:11:31 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1411173654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:11:31 compute-0 nova_compute[254880]: 2026-01-26 10:11:31.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:11:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:31 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.004 254884 DEBUG nova.network.neutron [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updating instance_info_cache with network_info: [{"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.021 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Releasing lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.022 254884 DEBUG nova.compute.manager [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Instance network_info: |[{"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.022 254884 DEBUG oslo_concurrency.lockutils [req-2febdba5-251e-4162-a108-424c0ef3532c req-c0eb5b4f-9d84-4a3f-afa6-27069387ed6b b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.022 254884 DEBUG nova.network.neutron [req-2febdba5-251e-4162-a108-424c0ef3532c req-c0eb5b4f-9d84-4a3f-afa6-27069387ed6b b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Refreshing network info cache for port 92a5f80f-60e2-449d-9da8-ebaa31f1476c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.025 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Start _get_guest_xml network_info=[{"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T10:05:39Z,direct_url=<?>,disk_format='qcow2',id=6789692f-fc1f-4efa-ae75-dcc13be695ef,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3ff3fa2a5531460b993c609589aa545d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T10:05:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'device_type': 'disk', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'image_id': '6789692f-fc1f-4efa-ae75-dcc13be695ef'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.029 254884 WARNING nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.158 254884 DEBUG nova.virt.libvirt.host [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.159 254884 DEBUG nova.virt.libvirt.host [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.164 254884 DEBUG nova.virt.libvirt.host [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.165 254884 DEBUG nova.virt.libvirt.host [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.165 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.165 254884 DEBUG nova.virt.hardware [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T10:05:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='57e1601b-dbfa-4d3b-8b96-27302e4a7a06',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T10:05:39Z,direct_url=<?>,disk_format='qcow2',id=6789692f-fc1f-4efa-ae75-dcc13be695ef,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3ff3fa2a5531460b993c609589aa545d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T10:05:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.166 254884 DEBUG nova.virt.hardware [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.166 254884 DEBUG nova.virt.hardware [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.166 254884 DEBUG nova.virt.hardware [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.166 254884 DEBUG nova.virt.hardware [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.167 254884 DEBUG nova.virt.hardware [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.167 254884 DEBUG nova.virt.hardware [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.167 254884 DEBUG nova.virt.hardware [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.167 254884 DEBUG nova.virt.hardware [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.167 254884 DEBUG nova.virt.hardware [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.168 254884 DEBUG nova.virt.hardware [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.170 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:11:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:32.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:32 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 10:11:32 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1364752818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.613 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.637 254884 DEBUG nova.storage.rbd_utils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 26741812-4ddf-457d-b571-7e2005b5133d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.641 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:11:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:32.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:32 compute-0 ceph-mon[74456]: pgmap v879: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 10:11:32 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3698987314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:11:32 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1364752818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.979 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.980 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.980 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.980 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:11:32 compute-0 nova_compute[254880]: 2026-01-26 10:11:32.981 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:11:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:33 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 10:11:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2361096299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.108 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.110 254884 DEBUG nova.virt.libvirt.vif [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T10:11:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-955673138',display_name='tempest-TestNetworkBasicOps-server-955673138',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-955673138',id=6,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCEIavFfmzh5bpA5QZf3zq5Gb6QqYI3VELaJd/a0a5TYtMMLwGqLcOYuI5vMKbR7fL+izNWg9808jvE9yRGaxYOyB4XbsZVXNV2ntaIKcWPfcrVa/D66+pB1i/BBWQEzIQ==',key_name='tempest-TestNetworkBasicOps-822391309',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-wm8zw3uy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T10:11:26Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=26741812-4ddf-457d-b571-7e2005b5133d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.110 254884 DEBUG nova.network.os_vif_util [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.111 254884 DEBUG nova.network.os_vif_util [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:a5:e7,bridge_name='br-int',has_traffic_filtering=True,id=92a5f80f-60e2-449d-9da8-ebaa31f1476c,network=Network(856aef2b-c9c5-4069-832f-1db92e31d6c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92a5f80f-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.113 254884 DEBUG nova.objects.instance [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 26741812-4ddf-457d-b571-7e2005b5133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.133 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] End _get_guest_xml xml=<domain type="kvm">
Jan 26 10:11:33 compute-0 nova_compute[254880]:   <uuid>26741812-4ddf-457d-b571-7e2005b5133d</uuid>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   <name>instance-00000006</name>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   <memory>131072</memory>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   <vcpu>1</vcpu>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   <metadata>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <nova:name>tempest-TestNetworkBasicOps-server-955673138</nova:name>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <nova:creationTime>2026-01-26 10:11:32</nova:creationTime>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <nova:flavor name="m1.nano">
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <nova:memory>128</nova:memory>
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <nova:disk>1</nova:disk>
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <nova:swap>0</nova:swap>
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <nova:vcpus>1</nova:vcpus>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       </nova:flavor>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <nova:owner>
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <nova:user uuid="c1208d3e25b940ea93fe76884c7a53db">tempest-TestNetworkBasicOps-966559857-project-member</nova:user>
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <nova:project uuid="6ed221b375a44fc2bb2a8f232c5446e7">tempest-TestNetworkBasicOps-966559857</nova:project>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       </nova:owner>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <nova:root type="image" uuid="6789692f-fc1f-4efa-ae75-dcc13be695ef"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <nova:ports>
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <nova:port uuid="92a5f80f-60e2-449d-9da8-ebaa31f1476c">
Jan 26 10:11:33 compute-0 nova_compute[254880]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:         </nova:port>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       </nova:ports>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     </nova:instance>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   </metadata>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   <sysinfo type="smbios">
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <system>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <entry name="manufacturer">RDO</entry>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <entry name="product">OpenStack Compute</entry>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <entry name="serial">26741812-4ddf-457d-b571-7e2005b5133d</entry>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <entry name="uuid">26741812-4ddf-457d-b571-7e2005b5133d</entry>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <entry name="family">Virtual Machine</entry>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     </system>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   </sysinfo>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   <os>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <boot dev="hd"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <smbios mode="sysinfo"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   </os>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   <features>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <acpi/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <apic/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <vmcoreinfo/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   </features>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   <clock offset="utc">
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <timer name="hpet" present="no"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   </clock>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   <cpu mode="host-model" match="exact">
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   </cpu>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   <devices>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <disk type="network" device="disk">
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <driver type="raw" cache="none"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <source protocol="rbd" name="vms/26741812-4ddf-457d-b571-7e2005b5133d_disk">
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <host name="192.168.122.100" port="6789"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <host name="192.168.122.102" port="6789"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <host name="192.168.122.101" port="6789"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       </source>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <auth username="openstack">
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <secret type="ceph" uuid="1a70b85d-e3fd-5814-8a6a-37ea00fcae30"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <target dev="vda" bus="virtio"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <disk type="network" device="cdrom">
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <driver type="raw" cache="none"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <source protocol="rbd" name="vms/26741812-4ddf-457d-b571-7e2005b5133d_disk.config">
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <host name="192.168.122.100" port="6789"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <host name="192.168.122.102" port="6789"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <host name="192.168.122.101" port="6789"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       </source>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <auth username="openstack">
Jan 26 10:11:33 compute-0 nova_compute[254880]:         <secret type="ceph" uuid="1a70b85d-e3fd-5814-8a6a-37ea00fcae30"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <target dev="sda" bus="sata"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <interface type="ethernet">
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <mac address="fa:16:3e:1b:a5:e7"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <model type="virtio"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <mtu size="1442"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <target dev="tap92a5f80f-60"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <serial type="pty">
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <log file="/var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/console.log" append="off"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     </serial>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <video>
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <model type="virtio"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     </video>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <input type="tablet" bus="usb"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <rng model="virtio">
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <backend model="random">/dev/urandom</backend>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     </rng>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <controller type="usb" index="0"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     <memballoon model="virtio">
Jan 26 10:11:33 compute-0 nova_compute[254880]:       <stats period="10"/>
Jan 26 10:11:33 compute-0 nova_compute[254880]:     </memballoon>
Jan 26 10:11:33 compute-0 nova_compute[254880]:   </devices>
Jan 26 10:11:33 compute-0 nova_compute[254880]: </domain>
Jan 26 10:11:33 compute-0 nova_compute[254880]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.135 254884 DEBUG nova.compute.manager [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Preparing to wait for external event network-vif-plugged-92a5f80f-60e2-449d-9da8-ebaa31f1476c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.141 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "26741812-4ddf-457d-b571-7e2005b5133d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.141 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.142 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.142 254884 DEBUG nova.virt.libvirt.vif [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T10:11:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-955673138',display_name='tempest-TestNetworkBasicOps-server-955673138',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-955673138',id=6,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCEIavFfmzh5bpA5QZf3zq5Gb6QqYI3VELaJd/a0a5TYtMMLwGqLcOYuI5vMKbR7fL+izNWg9808jvE9yRGaxYOyB4XbsZVXNV2ntaIKcWPfcrVa/D66+pB1i/BBWQEzIQ==',key_name='tempest-TestNetworkBasicOps-822391309',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-wm8zw3uy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T10:11:26Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=26741812-4ddf-457d-b571-7e2005b5133d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.142 254884 DEBUG nova.network.os_vif_util [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.143 254884 DEBUG nova.network.os_vif_util [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:a5:e7,bridge_name='br-int',has_traffic_filtering=True,id=92a5f80f-60e2-449d-9da8-ebaa31f1476c,network=Network(856aef2b-c9c5-4069-832f-1db92e31d6c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92a5f80f-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.143 254884 DEBUG os_vif [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:a5:e7,bridge_name='br-int',has_traffic_filtering=True,id=92a5f80f-60e2-449d-9da8-ebaa31f1476c,network=Network(856aef2b-c9c5-4069-832f-1db92e31d6c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92a5f80f-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.144 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.144 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.145 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.148 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.148 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92a5f80f-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.149 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap92a5f80f-60, col_values=(('external_ids', {'iface-id': '92a5f80f-60e2-449d-9da8-ebaa31f1476c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:a5:e7', 'vm-uuid': '26741812-4ddf-457d-b571-7e2005b5133d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:11:33 compute-0 NetworkManager[48970]: <info>  [1769422293.1517] manager: (tap92a5f80f-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.150 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.152 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.157 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.158 254884 INFO os_vif [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:a5:e7,bridge_name='br-int',has_traffic_filtering=True,id=92a5f80f-60e2-449d-9da8-ebaa31f1476c,network=Network(856aef2b-c9c5-4069-832f-1db92e31d6c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92a5f80f-60')
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.212 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.213 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.213 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No VIF found with MAC fa:16:3e:1b:a5:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.214 254884 INFO nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Using config drive
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.239 254884 DEBUG nova.storage.rbd_utils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 26741812-4ddf-457d-b571-7e2005b5133d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:11:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:11:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1050430947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.455 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:11:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.533 254884 DEBUG nova.network.neutron [req-2febdba5-251e-4162-a108-424c0ef3532c req-c0eb5b4f-9d84-4a3f-afa6-27069387ed6b b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updated VIF entry in instance network info cache for port 92a5f80f-60e2-449d-9da8-ebaa31f1476c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.533 254884 DEBUG nova.network.neutron [req-2febdba5-251e-4162-a108-424c0ef3532c req-c0eb5b4f-9d84-4a3f-afa6-27069387ed6b b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updating instance_info_cache with network_info: [{"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:11:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v880: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 10:11:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:11:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.760 254884 INFO nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Creating config drive at /var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/disk.config
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.765 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplbvlu_wf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.897 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplbvlu_wf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.926 254884 DEBUG nova.storage.rbd_utils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 26741812-4ddf-457d-b571-7e2005b5133d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:11:33 compute-0 nova_compute[254880]: 2026-01-26 10:11:33.930 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/disk.config 26741812-4ddf-457d-b571-7e2005b5133d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:11:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:33 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004b90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:34 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2361096299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:11:34 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1050430947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:11:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.143 254884 DEBUG oslo_concurrency.lockutils [req-2febdba5-251e-4162-a108-424c0ef3532c req-c0eb5b4f-9d84-4a3f-afa6-27069387ed6b b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.161 254884 DEBUG nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.161 254884 DEBUG nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.166 254884 DEBUG oslo_concurrency.processutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/disk.config 26741812-4ddf-457d-b571-7e2005b5133d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.166 254884 INFO nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Deleting local config drive /var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/disk.config because it was imported into RBD.
Jan 26 10:11:34 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 26 10:11:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:34.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:34 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 26 10:11:34 compute-0 kernel: tap92a5f80f-60: entered promiscuous mode
Jan 26 10:11:34 compute-0 NetworkManager[48970]: <info>  [1769422294.2698] manager: (tap92a5f80f-60): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.314 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:34 compute-0 ovn_controller[155832]: 2026-01-26T10:11:34Z|00037|binding|INFO|Claiming lport 92a5f80f-60e2-449d-9da8-ebaa31f1476c for this chassis.
Jan 26 10:11:34 compute-0 ovn_controller[155832]: 2026-01-26T10:11:34Z|00038|binding|INFO|92a5f80f-60e2-449d-9da8-ebaa31f1476c: Claiming fa:16:3e:1b:a5:e7 10.100.0.11
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.321 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:34 compute-0 systemd-udevd[267907]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 10:11:34 compute-0 systemd-machined[221254]: New machine qemu-2-instance-00000006.
Jan 26 10:11:34 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000006.
Jan 26 10:11:34 compute-0 NetworkManager[48970]: <info>  [1769422294.3670] device (tap92a5f80f-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 10:11:34 compute-0 NetworkManager[48970]: <info>  [1769422294.3678] device (tap92a5f80f-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 10:11:34 compute-0 ovn_controller[155832]: 2026-01-26T10:11:34Z|00039|binding|INFO|Setting lport 92a5f80f-60e2-449d-9da8-ebaa31f1476c ovn-installed in OVS
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.386 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.390 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.391 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4619MB free_disk=59.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.391 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.392 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:11:34 compute-0 ovn_controller[155832]: 2026-01-26T10:11:34Z|00040|binding|INFO|Setting lport 92a5f80f-60e2-449d-9da8-ebaa31f1476c up in Southbound
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.477 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:a5:e7 10.100.0.11'], port_security=['fa:16:3e:1b:a5:e7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '26741812-4ddf-457d-b571-7e2005b5133d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-856aef2b-c9c5-4069-832f-1db92e31d6c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '150e301c-4333-4419-97ed-4e455dd1f149', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc13df43-1d01-44bd-8119-99eabe1edcf4, chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], logical_port=92a5f80f-60e2-449d-9da8-ebaa31f1476c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.479 166625 INFO neutron.agent.ovn.metadata.agent [-] Port 92a5f80f-60e2-449d-9da8-ebaa31f1476c in datapath 856aef2b-c9c5-4069-832f-1db92e31d6c2 bound to our chassis
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.481 166625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 856aef2b-c9c5-4069-832f-1db92e31d6c2
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.496 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[328774ab-d198-4019-b6b4-8bc85aa872f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.497 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap856aef2b-c1 in ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.500 261020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap856aef2b-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.500 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[2df5a628-0b13-419f-aec0-34cfbec71930]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.501 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[6509d590-a711-4959-8abe-043d624b67b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.526 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[18b21bd6-e105-4653-b093-3ef5ccd98c74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:34 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004b90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.541 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[d6356242-394c-4253-a3dd-f9744d42f22a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.573 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[457179e6-f35d-4bdf-b72e-17ba4cfe14ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 NetworkManager[48970]: <info>  [1769422294.5833] manager: (tap856aef2b-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.582 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[9fbb2846-45c8-47f3-963c-4f31a4575238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.618 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[be0037e5-59d6-4132-893d-fc3ab3431aef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.622 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[450429ef-b553-4fc6-87af-d75bc653a69d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 NetworkManager[48970]: <info>  [1769422294.6442] device (tap856aef2b-c0): carrier: link connected
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.651 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[124d44d4-aa23-44fa-a2df-dcb43c781424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.653 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.669 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[b7846a81-c0ae-41f8-b2f3-4d1eeef397ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap856aef2b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:d3:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424830, 'reachable_time': 25005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267962, 'error': None, 'target': 'ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.686 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d2b46b-91f1-44c3-81e0-ed2e9aae115b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5e:d332'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 424830, 'tstamp': 424830}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267978, 'error': None, 'target': 'ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.704 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[195306db-b693-4daa-97a3-857590e3004b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap856aef2b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:d3:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424830, 'reachable_time': 25005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267982, 'error': None, 'target': 'ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:34.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.739 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[573e5fd3-4a4b-492f-9ca9-571406ccfb1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.798 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422294.7976172, 26741812-4ddf-457d-b571-7e2005b5133d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.798 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] VM Started (Lifecycle Event)
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.803 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[6f96381c-0144-4168-a693-b4db8b38b3fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.805 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap856aef2b-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.805 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.805 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap856aef2b-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.807 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:34 compute-0 NetworkManager[48970]: <info>  [1769422294.8078] manager: (tap856aef2b-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 26 10:11:34 compute-0 kernel: tap856aef2b-c0: entered promiscuous mode
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.808 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.809 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap856aef2b-c0, col_values=(('external_ids', {'iface-id': 'dcac661c-085c-4e05-b3e8-715548b0fd7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.810 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:34 compute-0 ovn_controller[155832]: 2026-01-26T10:11:34Z|00041|binding|INFO|Releasing lport dcac661c-085c-4e05-b3e8-715548b0fd7e from this chassis (sb_readonly=0)
Jan 26 10:11:34 compute-0 nova_compute[254880]: 2026-01-26 10:11:34.824 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.825 166625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/856aef2b-c9c5-4069-832f-1db92e31d6c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/856aef2b-c9c5-4069-832f-1db92e31d6c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.826 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[3545db79-a55f-4e1b-9121-a25dd871bd25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.827 166625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: global
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     log         /dev/log local0 debug
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     log-tag     haproxy-metadata-proxy-856aef2b-c9c5-4069-832f-1db92e31d6c2
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     user        root
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     group       root
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     maxconn     1024
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     pidfile     /var/lib/neutron/external/pids/856aef2b-c9c5-4069-832f-1db92e31d6c2.pid.haproxy
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     daemon
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: defaults
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     log global
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     mode http
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     option httplog
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     option dontlognull
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     option http-server-close
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     option forwardfor
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     retries                 3
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     timeout http-request    30s
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     timeout connect         30s
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     timeout client          32s
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     timeout server          32s
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     timeout http-keep-alive 30s
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: listen listener
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     bind 169.254.169.254:80
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:     http-request add-header X-OVN-Network-ID 856aef2b-c9c5-4069-832f-1db92e31d6c2
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 10:11:34 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:34.828 166625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2', 'env', 'PROCESS_TAG=haproxy-856aef2b-c9c5-4069-832f-1db92e31d6c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/856aef2b-c9c5-4069-832f-1db92e31d6c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 10:11:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:35 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:35 compute-0 ceph-mon[74456]: pgmap v880: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.125 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.131 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422294.798179, 26741812-4ddf-457d-b571-7e2005b5133d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.131 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] VM Paused (Lifecycle Event)
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.177 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.180 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.209 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 10:11:35 compute-0 podman[268021]: 2026-01-26 10:11:35.216748942 +0000 UTC m=+0.054954139 container create e30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 10:11:35 compute-0 systemd[1]: Started libpod-conmon-e30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6.scope.
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.264 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Instance 26741812-4ddf-457d-b571-7e2005b5133d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.265 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.266 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:11:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:11:35 compute-0 podman[268021]: 2026-01-26 10:11:35.189768867 +0000 UTC m=+0.027974084 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 26 10:11:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc56c0fa5845305781e311494871c58bd6083931411c754b6f88c3b9ccc4957/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 10:11:35 compute-0 podman[268021]: 2026-01-26 10:11:35.298703313 +0000 UTC m=+0.136908520 container init e30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 10:11:35 compute-0 podman[268021]: 2026-01-26 10:11:35.304997716 +0000 UTC m=+0.143202913 container start e30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 10:11:35 compute-0 neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2[268037]: [NOTICE]   (268041) : New worker (268043) forked
Jan 26 10:11:35 compute-0 neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2[268037]: [NOTICE]   (268041) : Loading success.
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.420 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.471 254884 DEBUG nova.compute.manager [req-f1462635-6528-4859-b7ff-8d23c92b7e86 req-6de210ed-3841-4816-ae34-2f9676e214f7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-vif-plugged-92a5f80f-60e2-449d-9da8-ebaa31f1476c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.472 254884 DEBUG oslo_concurrency.lockutils [req-f1462635-6528-4859-b7ff-8d23c92b7e86 req-6de210ed-3841-4816-ae34-2f9676e214f7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "26741812-4ddf-457d-b571-7e2005b5133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.473 254884 DEBUG oslo_concurrency.lockutils [req-f1462635-6528-4859-b7ff-8d23c92b7e86 req-6de210ed-3841-4816-ae34-2f9676e214f7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.473 254884 DEBUG oslo_concurrency.lockutils [req-f1462635-6528-4859-b7ff-8d23c92b7e86 req-6de210ed-3841-4816-ae34-2f9676e214f7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.473 254884 DEBUG nova.compute.manager [req-f1462635-6528-4859-b7ff-8d23c92b7e86 req-6de210ed-3841-4816-ae34-2f9676e214f7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Processing event network-vif-plugged-92a5f80f-60e2-449d-9da8-ebaa31f1476c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.474 254884 DEBUG nova.compute.manager [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.495 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422295.4844725, 26741812-4ddf-457d-b571-7e2005b5133d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.495 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] VM Resumed (Lifecycle Event)
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.515 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.527 254884 INFO nova.virt.libvirt.driver [-] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Instance spawned successfully.
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.527 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.545 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.550 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.553 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.554 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.555 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.555 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.556 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.556 254884 DEBUG nova.virt.libvirt.driver [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.586 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 10:11:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v881: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.627 254884 INFO nova.compute.manager [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Took 8.77 seconds to spawn the instance on the hypervisor.
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.628 254884 DEBUG nova.compute.manager [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:11:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:11:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2607926891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.698 254884 INFO nova.compute.manager [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Took 10.40 seconds to build instance.
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.728 254884 DEBUG oslo_concurrency.lockutils [None req-f6339496-2e46-49e1-83f7-3536a8db967c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:11:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:11:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1296608185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.947 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.953 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.975 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:11:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:35 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:35 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.999 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:11:36 compute-0 nova_compute[254880]: 2026-01-26 10:11:35.999 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:11:36 compute-0 nova_compute[254880]: 2026-01-26 10:11:36.000 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:11:36 compute-0 nova_compute[254880]: 2026-01-26 10:11:36.001 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 10:11:36 compute-0 nova_compute[254880]: 2026-01-26 10:11:36.017 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 10:11:36 compute-0 nova_compute[254880]: 2026-01-26 10:11:36.018 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:11:36 compute-0 nova_compute[254880]: 2026-01-26 10:11:36.018 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 10:11:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:36.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:36 compute-0 ceph-mon[74456]: pgmap v881: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Jan 26 10:11:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2607926891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:11:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1296608185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:11:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:36 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:11:36] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Jan 26 10:11:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:11:36] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Jan 26 10:11:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:36.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:36 compute-0 nova_compute[254880]: 2026-01-26 10:11:36.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:11:36 compute-0 nova_compute[254880]: 2026-01-26 10:11:36.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:11:36 compute-0 nova_compute[254880]: 2026-01-26 10:11:36.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:11:36 compute-0 nova_compute[254880]: 2026-01-26 10:11:36.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:11:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:11:37.143Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:11:37 compute-0 nova_compute[254880]: 2026-01-26 10:11:37.180 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:11:37 compute-0 nova_compute[254880]: 2026-01-26 10:11:37.180 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquired lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:11:37 compute-0 nova_compute[254880]: 2026-01-26 10:11:37.180 254884 DEBUG nova.network.neutron [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 10:11:37 compute-0 nova_compute[254880]: 2026-01-26 10:11:37.180 254884 DEBUG nova.objects.instance [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 26741812-4ddf-457d-b571-7e2005b5133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:11:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3663071606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:11:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v882: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 26 10:11:37 compute-0 nova_compute[254880]: 2026-01-26 10:11:37.617 254884 DEBUG nova.compute.manager [req-dac948bc-6270-4cff-a95e-ecc3d0de837e req-09d4d621-80d6-4454-b3cf-9149ecde9d89 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-vif-plugged-92a5f80f-60e2-449d-9da8-ebaa31f1476c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:11:37 compute-0 nova_compute[254880]: 2026-01-26 10:11:37.618 254884 DEBUG oslo_concurrency.lockutils [req-dac948bc-6270-4cff-a95e-ecc3d0de837e req-09d4d621-80d6-4454-b3cf-9149ecde9d89 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "26741812-4ddf-457d-b571-7e2005b5133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:11:37 compute-0 nova_compute[254880]: 2026-01-26 10:11:37.618 254884 DEBUG oslo_concurrency.lockutils [req-dac948bc-6270-4cff-a95e-ecc3d0de837e req-09d4d621-80d6-4454-b3cf-9149ecde9d89 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:11:37 compute-0 nova_compute[254880]: 2026-01-26 10:11:37.618 254884 DEBUG oslo_concurrency.lockutils [req-dac948bc-6270-4cff-a95e-ecc3d0de837e req-09d4d621-80d6-4454-b3cf-9149ecde9d89 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:11:37 compute-0 nova_compute[254880]: 2026-01-26 10:11:37.618 254884 DEBUG nova.compute.manager [req-dac948bc-6270-4cff-a95e-ecc3d0de837e req-09d4d621-80d6-4454-b3cf-9149ecde9d89 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] No waiting events found dispatching network-vif-plugged-92a5f80f-60e2-449d-9da8-ebaa31f1476c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:11:37 compute-0 nova_compute[254880]: 2026-01-26 10:11:37.618 254884 WARNING nova.compute.manager [req-dac948bc-6270-4cff-a95e-ecc3d0de837e req-09d4d621-80d6-4454-b3cf-9149ecde9d89 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received unexpected event network-vif-plugged-92a5f80f-60e2-449d-9da8-ebaa31f1476c for instance with vm_state active and task_state None.
Jan 26 10:11:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:38 compute-0 nova_compute[254880]: 2026-01-26 10:11:38.198 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 10:11:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:38.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 10:11:38 compute-0 ceph-mon[74456]: pgmap v882: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 26 10:11:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:11:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:38 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 10:11:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:38.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 10:11:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:39 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v883: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 26 10:11:39 compute-0 nova_compute[254880]: 2026-01-26 10:11:39.657 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:39 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:40.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:11:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:40 compute-0 ceph-mon[74456]: pgmap v883: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 26 10:11:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:40.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:41 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v884: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 10:11:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:41 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:42.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:42 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 10:11:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:42.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 10:11:42 compute-0 ceph-mon[74456]: pgmap v884: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 10:11:42 compute-0 nova_compute[254880]: 2026-01-26 10:11:42.870 254884 DEBUG nova.network.neutron [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updating instance_info_cache with network_info: [{"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:11:42 compute-0 nova_compute[254880]: 2026-01-26 10:11:42.909 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Releasing lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:11:42 compute-0 nova_compute[254880]: 2026-01-26 10:11:42.910 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 10:11:42 compute-0 nova_compute[254880]: 2026-01-26 10:11:42.910 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:11:42 compute-0 nova_compute[254880]: 2026-01-26 10:11:42.910 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:11:42 compute-0 nova_compute[254880]: 2026-01-26 10:11:42.911 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:11:42 compute-0 nova_compute[254880]: 2026-01-26 10:11:42.911 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:11:42 compute-0 nova_compute[254880]: 2026-01-26 10:11:42.911 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:11:42 compute-0 nova_compute[254880]: 2026-01-26 10:11:42.911 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:11:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:43 compute-0 nova_compute[254880]: 2026-01-26 10:11:43.201 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:11:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v885: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:11:43 compute-0 sudo[268083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:11:43 compute-0 sudo[268083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:11:43 compute-0 sudo[268083]: pam_unix(sudo:session): session closed for user root
Jan 26 10:11:43 compute-0 nova_compute[254880]: 2026-01-26 10:11:43.924 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:11:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:44 compute-0 ceph-mon[74456]: pgmap v885: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:11:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:44.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:44 compute-0 nova_compute[254880]: 2026-01-26 10:11:44.247 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:44 compute-0 NetworkManager[48970]: <info>  [1769422304.2479] manager: (patch-br-int-to-provnet-94d9950f-5cf2-4813-9455-dd14377245f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Jan 26 10:11:44 compute-0 ovn_controller[155832]: 2026-01-26T10:11:44Z|00042|binding|INFO|Releasing lport dcac661c-085c-4e05-b3e8-715548b0fd7e from this chassis (sb_readonly=0)
Jan 26 10:11:44 compute-0 NetworkManager[48970]: <info>  [1769422304.2487] manager: (patch-provnet-94d9950f-5cf2-4813-9455-dd14377245f4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 26 10:11:44 compute-0 ovn_controller[155832]: 2026-01-26T10:11:44Z|00043|binding|INFO|Releasing lport dcac661c-085c-4e05-b3e8-715548b0fd7e from this chassis (sb_readonly=0)
Jan 26 10:11:44 compute-0 nova_compute[254880]: 2026-01-26 10:11:44.256 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:44 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:44 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24001cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:44 compute-0 nova_compute[254880]: 2026-01-26 10:11:44.657 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:11:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:44.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:11:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:45 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:45 compute-0 nova_compute[254880]: 2026-01-26 10:11:45.113 254884 DEBUG nova.compute.manager [req-84e7c2f6-814a-4899-96c2-8e7445f8f678 req-a0d1b8bf-c856-46f4-ba86-eab3c5739936 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-changed-92a5f80f-60e2-449d-9da8-ebaa31f1476c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:11:45 compute-0 nova_compute[254880]: 2026-01-26 10:11:45.114 254884 DEBUG nova.compute.manager [req-84e7c2f6-814a-4899-96c2-8e7445f8f678 req-a0d1b8bf-c856-46f4-ba86-eab3c5739936 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Refreshing instance network info cache due to event network-changed-92a5f80f-60e2-449d-9da8-ebaa31f1476c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:11:45 compute-0 nova_compute[254880]: 2026-01-26 10:11:45.114 254884 DEBUG oslo_concurrency.lockutils [req-84e7c2f6-814a-4899-96c2-8e7445f8f678 req-a0d1b8bf-c856-46f4-ba86-eab3c5739936 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:11:45 compute-0 nova_compute[254880]: 2026-01-26 10:11:45.114 254884 DEBUG oslo_concurrency.lockutils [req-84e7c2f6-814a-4899-96c2-8e7445f8f678 req-a0d1b8bf-c856-46f4-ba86-eab3c5739936 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:11:45 compute-0 nova_compute[254880]: 2026-01-26 10:11:45.114 254884 DEBUG nova.network.neutron [req-84e7c2f6-814a-4899-96c2-8e7445f8f678 req-a0d1b8bf-c856-46f4-ba86-eab3c5739936 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Refreshing network info cache for port 92a5f80f-60e2-449d-9da8-ebaa31f1476c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:11:45 compute-0 podman[268113]: 2026-01-26 10:11:45.151093904 +0000 UTC m=+0.081631173 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 10:11:45 compute-0 sshd-session[268111]: Invalid user postgres from 157.245.76.178 port 55950
Jan 26 10:11:45 compute-0 sshd-session[268111]: Connection closed by invalid user postgres 157.245.76.178 port 55950 [preauth]
Jan 26 10:11:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v886: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:11:45 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:45 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:11:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:46.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:11:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:46 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:11:46] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Jan 26 10:11:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:11:46] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Jan 26 10:11:46 compute-0 ceph-mon[74456]: pgmap v886: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:11:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:46.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:47 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24001cd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:11:47.145Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:11:47 compute-0 nova_compute[254880]: 2026-01-26 10:11:47.298 254884 DEBUG nova.network.neutron [req-84e7c2f6-814a-4899-96c2-8e7445f8f678 req-a0d1b8bf-c856-46f4-ba86-eab3c5739936 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updated VIF entry in instance network info cache for port 92a5f80f-60e2-449d-9da8-ebaa31f1476c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:11:47 compute-0 nova_compute[254880]: 2026-01-26 10:11:47.299 254884 DEBUG nova.network.neutron [req-84e7c2f6-814a-4899-96c2-8e7445f8f678 req-a0d1b8bf-c856-46f4-ba86-eab3c5739936 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updating instance_info_cache with network_info: [{"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:11:47 compute-0 nova_compute[254880]: 2026-01-26 10:11:47.388 254884 DEBUG oslo_concurrency.lockutils [req-84e7c2f6-814a-4899-96c2-8e7445f8f678 req-a0d1b8bf-c856-46f4-ba86-eab3c5739936 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:11:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v887: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 26 10:11:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:48 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:48 compute-0 nova_compute[254880]: 2026-01-26 10:11:48.205 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 26 10:11:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:48.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 26 10:11:48 compute-0 ovn_controller[155832]: 2026-01-26T10:11:48Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:a5:e7 10.100.0.11
Jan 26 10:11:48 compute-0 ovn_controller[155832]: 2026-01-26T10:11:48Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:a5:e7 10.100.0.11
Jan 26 10:11:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:11:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:48 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:48 compute-0 ceph-mon[74456]: pgmap v887: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 26 10:11:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:11:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:11:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:48.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:11:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:11:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:11:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:11:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:11:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:11:49 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:49 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v888: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 26 10:11:49 compute-0 nova_compute[254880]: 2026-01-26 10:11:49.658 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:11:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:50.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:11:50 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:50 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:50.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:51 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:51 compute-0 ceph-mon[74456]: pgmap v888: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 26 10:11:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v889: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 26 10:11:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:52 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:52 compute-0 ceph-mon[74456]: pgmap v889: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 26 10:11:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:52.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:52 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:52.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:53 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:53 compute-0 nova_compute[254880]: 2026-01-26 10:11:53.207 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:11:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v890: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 10:11:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:54 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18004c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:54.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:54 compute-0 nova_compute[254880]: 2026-01-26 10:11:54.433 254884 INFO nova.compute.manager [None req-ab1fbbe3-191f-4f1d-863b-747e5c8eee59 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Get console output
Jan 26 10:11:54 compute-0 nova_compute[254880]: 2026-01-26 10:11:54.438 254884 INFO oslo.privsep.daemon [None req-ab1fbbe3-191f-4f1d-863b-747e5c8eee59 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpufs7kg0c/privsep.sock']
Jan 26 10:11:54 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:54 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:54 compute-0 nova_compute[254880]: 2026-01-26 10:11:54.661 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:54.695 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:11:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:54.696 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:11:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:11:54.697 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:11:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:54.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:55 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:55 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:55 compute-0 nova_compute[254880]: 2026-01-26 10:11:55.111 254884 INFO oslo.privsep.daemon [None req-ab1fbbe3-191f-4f1d-863b-747e5c8eee59 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Spawned new privsep daemon via rootwrap
Jan 26 10:11:55 compute-0 nova_compute[254880]: 2026-01-26 10:11:54.977 268147 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 10:11:55 compute-0 nova_compute[254880]: 2026-01-26 10:11:54.981 268147 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 10:11:55 compute-0 nova_compute[254880]: 2026-01-26 10:11:54.983 268147 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 26 10:11:55 compute-0 nova_compute[254880]: 2026-01-26 10:11:54.983 268147 INFO oslo.privsep.daemon [-] privsep daemon running as pid 268147
Jan 26 10:11:55 compute-0 nova_compute[254880]: 2026-01-26 10:11:55.226 268147 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 10:11:55 compute-0 ceph-mon[74456]: pgmap v890: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 10:11:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v891: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 10:11:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:56 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:56.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:56 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:11:56] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 26 10:11:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:11:56] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 26 10:11:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:56.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:57 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:57 compute-0 ceph-mon[74456]: pgmap v891: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 10:11:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:11:57.146Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:11:57 compute-0 podman[268152]: 2026-01-26 10:11:57.173031075 +0000 UTC m=+0.100984276 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 10:11:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v892: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 10:11:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:58 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:58 compute-0 ceph-mon[74456]: pgmap v892: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 10:11:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 10:11:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3514844431' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:11:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 10:11:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3514844431' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:11:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:11:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:11:58.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:11:58 compute-0 nova_compute[254880]: 2026-01-26 10:11:58.266 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:11:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:11:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:58 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14002110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:11:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 10:11:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:11:58.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 10:11:59 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:11:59 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:11:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3514844431' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:11:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3514844431' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:11:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v893: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 10:11:59 compute-0 nova_compute[254880]: 2026-01-26 10:11:59.663 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:00.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:00 compute-0 ceph-mon[74456]: pgmap v893: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 10:12:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:12:00 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:00 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:00.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:01 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:01 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v894: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:12:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:02 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:02.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:02 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:02.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:03 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:03 compute-0 ceph-mon[74456]: pgmap v894: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:12:03 compute-0 nova_compute[254880]: 2026-01-26 10:12:03.269 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:12:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v895: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 1 op/s
Jan 26 10:12:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:12:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:12:03 compute-0 sudo[268184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:12:03 compute-0 sudo[268184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:12:03 compute-0 sudo[268184]: pam_unix(sudo:session): session closed for user root
Jan 26 10:12:03 compute-0 sudo[268209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:12:03 compute-0 sudo[268209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:12:03 compute-0 sudo[268209]: pam_unix(sudo:session): session closed for user root
Jan 26 10:12:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:04 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:04 compute-0 sudo[268234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:12:04 compute-0 sudo[268234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:12:04 compute-0 ceph-mon[74456]: pgmap v895: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 1 op/s
Jan 26 10:12:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:12:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:04.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:04 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:04 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:04 compute-0 sudo[268234]: pam_unix(sudo:session): session closed for user root
Jan 26 10:12:04 compute-0 nova_compute[254880]: 2026-01-26 10:12:04.665 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:04.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:05 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:05 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v896: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 14 KiB/s wr, 1 op/s
Jan 26 10:12:05 compute-0 nova_compute[254880]: 2026-01-26 10:12:05.806 254884 DEBUG oslo_concurrency.lockutils [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "interface-26741812-4ddf-457d-b571-7e2005b5133d-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:12:05 compute-0 nova_compute[254880]: 2026-01-26 10:12:05.806 254884 DEBUG oslo_concurrency.lockutils [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "interface-26741812-4ddf-457d-b571-7e2005b5133d-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:12:05 compute-0 nova_compute[254880]: 2026-01-26 10:12:05.807 254884 DEBUG nova.objects.instance [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'flavor' on Instance uuid 26741812-4ddf-457d-b571-7e2005b5133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:12:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:06 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000029s ======
Jan 26 10:12:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:06.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 26 10:12:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:06 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:12:06] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 26 10:12:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:12:06] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 26 10:12:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:06.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:07 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:07 compute-0 ceph-mon[74456]: pgmap v896: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 14 KiB/s wr, 1 op/s
Jan 26 10:12:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:12:07.147Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
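The Alertmanager dispatcher error above shows the "ceph-dashboard" webhook timing out while POSTing to http://compute-{1,2}.ctlplane.example.com:8443/api/prometheus_receiver, i.e. the dashboard receivers on the other hosts are unreachable within the notify deadline. A stand-in receiver is handy for checking that alerts leave Alertmanager at all; the path and port below are taken from the failing URL, everything else is an assumption:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_error(404)
                return
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            print(json.loads(body))      # Alertmanager webhook payload
            self.send_response(200)
            self.end_headers()

    HTTPServer(("", 8443), Receiver).serve_forever()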
Jan 26 10:12:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v897: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 1 op/s
Jan 26 10:12:07 compute-0 nova_compute[254880]: 2026-01-26 10:12:07.963 254884 DEBUG nova.objects.instance [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 26741812-4ddf-457d-b571-7e2005b5133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:12:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:08 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:08 compute-0 ceph-mon[74456]: pgmap v897: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 1 op/s
Jan 26 10:12:08 compute-0 nova_compute[254880]: 2026-01-26 10:12:08.076 254884 DEBUG nova.network.neutron [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 10:12:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.002000088s ======
Jan 26 10:12:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:08.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000088s
Jan 26 10:12:08 compute-0 nova_compute[254880]: 2026-01-26 10:12:08.272 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 10:12:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:12:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:12:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 10:12:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:12:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:12:08 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:12:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:12:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:12:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:12:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:08 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:12:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:12:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000044s ======
Jan 26 10:12:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:08.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000044s
Jan 26 10:12:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:12:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:12:08 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:12:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:12:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:12:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:12:08 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
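The handle_command/audit burst above is the cephadm mgr module driving the mon: config-key sets, "config generate-minimal-conf", "auth get", and an "osd tree" query, each dispatched as a mon_command. The same commands can be issued from the python-rados binding; a sketch assuming a local ceph.conf and client.admin keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same JSON command shape the mgr dispatches in the audit log above.
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outbuf.decode())
    cluster.shutdown()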
Jan 26 10:12:08 compute-0 sudo[268296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:12:08 compute-0 sudo[268296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:12:08 compute-0 sudo[268296]: pam_unix(sudo:session): session closed for user root
Jan 26 10:12:08 compute-0 sudo[268321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:12:08 compute-0 sudo[268321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:12:08 compute-0 nova_compute[254880]: 2026-01-26 10:12:08.996 254884 DEBUG nova.policy [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1208d3e25b940ea93fe76884c7a53db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
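The nova.policy DEBUG above records an oslo.policy check: the request's credentials (roles reader/member, non-admin) fail "network:attach_external_network", so the port will be created without external-network privileges. A rough illustration of that check; the rule string 'role:admin' is an assumption for the sketch, while the creds mirror the logged dict:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Hypothetical default for the action checked above.
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "role:admin"))

    creds = {"roles": ["reader", "member"],
             "project_id": "6ed221b375a44fc2bb2a8f232c5446e7",
             "is_admin": False}
    # False for these credentials, matching the "Policy check ... failed" line.
    print(enforcer.enforce("network:attach_external_network", {}, creds))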
Jan 26 10:12:09 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:09 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:09 compute-0 podman[268388]: 2026-01-26 10:12:09.327068156 +0000 UTC m=+0.035503420 container create a9fdd2f2be29ca84c3ac258e5a7bf73fa1ef26660e00875bbba8c90d36ab9ca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tharp, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 26 10:12:09 compute-0 systemd[1]: Started libpod-conmon-a9fdd2f2be29ca84c3ac258e5a7bf73fa1ef26660e00875bbba8c90d36ab9ca5.scope.
Jan 26 10:12:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:12:09 compute-0 podman[268388]: 2026-01-26 10:12:09.311230797 +0000 UTC m=+0.019666081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:12:09 compute-0 podman[268388]: 2026-01-26 10:12:09.408583261 +0000 UTC m=+0.117018545 container init a9fdd2f2be29ca84c3ac258e5a7bf73fa1ef26660e00875bbba8c90d36ab9ca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 26 10:12:09 compute-0 podman[268388]: 2026-01-26 10:12:09.415486 +0000 UTC m=+0.123921264 container start a9fdd2f2be29ca84c3ac258e5a7bf73fa1ef26660e00875bbba8c90d36ab9ca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tharp, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 26 10:12:09 compute-0 agitated_tharp[268405]: 167 167
Jan 26 10:12:09 compute-0 systemd[1]: libpod-a9fdd2f2be29ca84c3ac258e5a7bf73fa1ef26660e00875bbba8c90d36ab9ca5.scope: Deactivated successfully.
Jan 26 10:12:09 compute-0 podman[268388]: 2026-01-26 10:12:09.424078174 +0000 UTC m=+0.132513458 container attach a9fdd2f2be29ca84c3ac258e5a7bf73fa1ef26660e00875bbba8c90d36ab9ca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:12:09 compute-0 podman[268388]: 2026-01-26 10:12:09.42442702 +0000 UTC m=+0.132862294 container died a9fdd2f2be29ca84c3ac258e5a7bf73fa1ef26660e00875bbba8c90d36ab9ca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tharp, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:12:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-51c28e5436e31172c28453b3b53f5545fcb03906a8f6380322c08b30a7c1dd63-merged.mount: Deactivated successfully.
Jan 26 10:12:09 compute-0 podman[268388]: 2026-01-26 10:12:09.4675789 +0000 UTC m=+0.176014164 container remove a9fdd2f2be29ca84c3ac258e5a7bf73fa1ef26660e00875bbba8c90d36ab9ca5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 26 10:12:09 compute-0 systemd[1]: libpod-conmon-a9fdd2f2be29ca84c3ac258e5a7bf73fa1ef26660e00875bbba8c90d36ab9ca5.scope: Deactivated successfully.
Jan 26 10:12:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:12:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:12:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:12:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:12:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:12:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:12:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:12:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:12:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:12:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v898: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 1 op/s
Jan 26 10:12:09 compute-0 podman[268427]: 2026-01-26 10:12:09.639448197 +0000 UTC m=+0.041141112 container create 4e451f0b0a13fa4fbe77757e1e5b7a29e073edceb62963684408b6b7010d63b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gates, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 10:12:09 compute-0 nova_compute[254880]: 2026-01-26 10:12:09.667 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:09 compute-0 systemd[1]: Started libpod-conmon-4e451f0b0a13fa4fbe77757e1e5b7a29e073edceb62963684408b6b7010d63b9.scope.
Jan 26 10:12:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a5c16ba5b0093493ae1a75b1860a230b125384a6db596d0645836e42eb4f44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a5c16ba5b0093493ae1a75b1860a230b125384a6db596d0645836e42eb4f44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a5c16ba5b0093493ae1a75b1860a230b125384a6db596d0645836e42eb4f44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a5c16ba5b0093493ae1a75b1860a230b125384a6db596d0645836e42eb4f44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a5c16ba5b0093493ae1a75b1860a230b125384a6db596d0645836e42eb4f44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:09 compute-0 podman[268427]: 2026-01-26 10:12:09.71264516 +0000 UTC m=+0.114338095 container init 4e451f0b0a13fa4fbe77757e1e5b7a29e073edceb62963684408b6b7010d63b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gates, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 10:12:09 compute-0 podman[268427]: 2026-01-26 10:12:09.623358287 +0000 UTC m=+0.025051232 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:12:09 compute-0 podman[268427]: 2026-01-26 10:12:09.72247944 +0000 UTC m=+0.124172365 container start 4e451f0b0a13fa4fbe77757e1e5b7a29e073edceb62963684408b6b7010d63b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gates, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 10:12:09 compute-0 podman[268427]: 2026-01-26 10:12:09.72603898 +0000 UTC m=+0.127731905 container attach 4e451f0b0a13fa4fbe77757e1e5b7a29e073edceb62963684408b6b7010d63b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gates, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 26 10:12:10 compute-0 angry_gates[268444]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:12:10 compute-0 angry_gates[268444]: --> All data devices are unavailable
Jan 26 10:12:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:10 compute-0 systemd[1]: libpod-4e451f0b0a13fa4fbe77757e1e5b7a29e073edceb62963684408b6b7010d63b9.scope: Deactivated successfully.
Jan 26 10:12:10 compute-0 podman[268427]: 2026-01-26 10:12:10.050665498 +0000 UTC m=+0.452358423 container died 4e451f0b0a13fa4fbe77757e1e5b7a29e073edceb62963684408b6b7010d63b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gates, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-47a5c16ba5b0093493ae1a75b1860a230b125384a6db596d0645836e42eb4f44-merged.mount: Deactivated successfully.
Jan 26 10:12:10 compute-0 podman[268427]: 2026-01-26 10:12:10.103278022 +0000 UTC m=+0.504970937 container remove 4e451f0b0a13fa4fbe77757e1e5b7a29e073edceb62963684408b6b7010d63b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:12:10 compute-0 systemd[1]: libpod-conmon-4e451f0b0a13fa4fbe77757e1e5b7a29e073edceb62963684408b6b7010d63b9.scope: Deactivated successfully.
Jan 26 10:12:10 compute-0 sudo[268321]: pam_unix(sudo:session): session closed for user root
Jan 26 10:12:10 compute-0 sudo[268470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:12:10 compute-0 sudo[268470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:12:10 compute-0 sudo[268470]: pam_unix(sudo:session): session closed for user root
Jan 26 10:12:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000044s ======
Jan 26 10:12:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:10.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000044s
Jan 26 10:12:10 compute-0 sudo[268495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:12:10 compute-0 sudo[268495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:12:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:12:10 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:10 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:10 compute-0 ceph-mon[74456]: pgmap v898: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 1 op/s
Jan 26 10:12:10 compute-0 podman[268561]: 2026-01-26 10:12:10.662685072 +0000 UTC m=+0.039842764 container create 907b2c68ad0c338eef9261226aee8139e19595cd8bfd56b44e6f87d8c6d8fe58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_shockley, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:12:10 compute-0 systemd[1]: Started libpod-conmon-907b2c68ad0c338eef9261226aee8139e19595cd8bfd56b44e6f87d8c6d8fe58.scope.
Jan 26 10:12:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:12:10 compute-0 podman[268561]: 2026-01-26 10:12:10.738288233 +0000 UTC m=+0.115445945 container init 907b2c68ad0c338eef9261226aee8139e19595cd8bfd56b44e6f87d8c6d8fe58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_shockley, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:12:10 compute-0 podman[268561]: 2026-01-26 10:12:10.64476476 +0000 UTC m=+0.021922462 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:12:10 compute-0 podman[268561]: 2026-01-26 10:12:10.745525077 +0000 UTC m=+0.122682769 container start 907b2c68ad0c338eef9261226aee8139e19595cd8bfd56b44e6f87d8c6d8fe58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:12:10 compute-0 podman[268561]: 2026-01-26 10:12:10.748908227 +0000 UTC m=+0.126065939 container attach 907b2c68ad0c338eef9261226aee8139e19595cd8bfd56b44e6f87d8c6d8fe58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_shockley, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:12:10 compute-0 keen_shockley[268577]: 167 167
Jan 26 10:12:10 compute-0 systemd[1]: libpod-907b2c68ad0c338eef9261226aee8139e19595cd8bfd56b44e6f87d8c6d8fe58.scope: Deactivated successfully.
Jan 26 10:12:10 compute-0 podman[268561]: 2026-01-26 10:12:10.751831038 +0000 UTC m=+0.128988730 container died 907b2c68ad0c338eef9261226aee8139e19595cd8bfd56b44e6f87d8c6d8fe58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 26 10:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-44439fa5d7115bec976c1e7d398eb40cc9a6f099eab1892881b4f9afe7667c90-merged.mount: Deactivated successfully.
Jan 26 10:12:10 compute-0 podman[268561]: 2026-01-26 10:12:10.785667901 +0000 UTC m=+0.162825593 container remove 907b2c68ad0c338eef9261226aee8139e19595cd8bfd56b44e6f87d8c6d8fe58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 10:12:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000044s ======
Jan 26 10:12:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:10.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000044s
Jan 26 10:12:10 compute-0 systemd[1]: libpod-conmon-907b2c68ad0c338eef9261226aee8139e19595cd8bfd56b44e6f87d8c6d8fe58.scope: Deactivated successfully.
Jan 26 10:12:10 compute-0 podman[268600]: 2026-01-26 10:12:10.952966294 +0000 UTC m=+0.046801084 container create ec2f2c3f62fde404323635c1f20d3178b7d75432c9fe79f59a57f48c7cf4013c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_shamir, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:12:10 compute-0 systemd[1]: Started libpod-conmon-ec2f2c3f62fde404323635c1f20d3178b7d75432c9fe79f59a57f48c7cf4013c.scope.
Jan 26 10:12:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cc21acdbe1e238eec261cd96aa4de73a7940f757e4d528a9747fbe738db98d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cc21acdbe1e238eec261cd96aa4de73a7940f757e4d528a9747fbe738db98d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cc21acdbe1e238eec261cd96aa4de73a7940f757e4d528a9747fbe738db98d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cc21acdbe1e238eec261cd96aa4de73a7940f757e4d528a9747fbe738db98d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:11 compute-0 podman[268600]: 2026-01-26 10:12:11.0224275 +0000 UTC m=+0.116262310 container init ec2f2c3f62fde404323635c1f20d3178b7d75432c9fe79f59a57f48c7cf4013c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 10:12:11 compute-0 podman[268600]: 2026-01-26 10:12:10.933219071 +0000 UTC m=+0.027053881 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:12:11 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:11 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:11 compute-0 podman[268600]: 2026-01-26 10:12:11.030975504 +0000 UTC m=+0.124810294 container start ec2f2c3f62fde404323635c1f20d3178b7d75432c9fe79f59a57f48c7cf4013c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:12:11 compute-0 podman[268600]: 2026-01-26 10:12:11.034361105 +0000 UTC m=+0.128195975 container attach ec2f2c3f62fde404323635c1f20d3178b7d75432c9fe79f59a57f48c7cf4013c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_shamir, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:12:11 compute-0 nova_compute[254880]: 2026-01-26 10:12:11.081 254884 DEBUG nova.network.neutron [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Successfully created port: 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 10:12:11 compute-0 nice_shamir[268616]: {
Jan 26 10:12:11 compute-0 nice_shamir[268616]:     "0": [
Jan 26 10:12:11 compute-0 nice_shamir[268616]:         {
Jan 26 10:12:11 compute-0 nice_shamir[268616]:             "devices": [
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "/dev/loop3"
Jan 26 10:12:11 compute-0 nice_shamir[268616]:             ],
Jan 26 10:12:11 compute-0 nice_shamir[268616]:             "lv_name": "ceph_lv0",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:             "lv_size": "21470642176",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:             "name": "ceph_lv0",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:             "tags": {
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "ceph.cluster_name": "ceph",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "ceph.crush_device_class": "",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "ceph.encrypted": "0",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "ceph.osd_id": "0",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "ceph.type": "block",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "ceph.vdo": "0",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:                 "ceph.with_tpm": "0"
Jan 26 10:12:11 compute-0 nice_shamir[268616]:             },
Jan 26 10:12:11 compute-0 nice_shamir[268616]:             "type": "block",
Jan 26 10:12:11 compute-0 nice_shamir[268616]:             "vg_name": "ceph_vg0"
Jan 26 10:12:11 compute-0 nice_shamir[268616]:         }
Jan 26 10:12:11 compute-0 nice_shamir[268616]:     ]
Jan 26 10:12:11 compute-0 nice_shamir[268616]: }
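The JSON block printed by the "nice_shamir" container above is the report from "ceph-volume ... lvm list --format json": a map of OSD id to its logical volumes, with the ceph.* LV tags duplicated under "tags". A sketch of consuming that shape, assuming the output has been saved to a local file named lvm_list.json:

    import json

    # Pull out the fields the orchestrator cares about: backing device,
    # OSD fsid, and whether the LV is encrypted.
    with open("lvm_list.json") as f:          # assumption: saved report
        report = json.load(f)
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv["tags"]
            print(osd_id, lv["lv_path"], lv["devices"],
                  tags["ceph.osd_fsid"], tags["ceph.encrypted"])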
Jan 26 10:12:11 compute-0 systemd[1]: libpod-ec2f2c3f62fde404323635c1f20d3178b7d75432c9fe79f59a57f48c7cf4013c.scope: Deactivated successfully.
Jan 26 10:12:11 compute-0 podman[268625]: 2026-01-26 10:12:11.365153409 +0000 UTC m=+0.028054226 container died ec2f2c3f62fde404323635c1f20d3178b7d75432c9fe79f59a57f48c7cf4013c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_shamir, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 26 10:12:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-58cc21acdbe1e238eec261cd96aa4de73a7940f757e4d528a9747fbe738db98d-merged.mount: Deactivated successfully.
Jan 26 10:12:11 compute-0 podman[268625]: 2026-01-26 10:12:11.403800708 +0000 UTC m=+0.066701495 container remove ec2f2c3f62fde404323635c1f20d3178b7d75432c9fe79f59a57f48c7cf4013c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:12:11 compute-0 systemd[1]: libpod-conmon-ec2f2c3f62fde404323635c1f20d3178b7d75432c9fe79f59a57f48c7cf4013c.scope: Deactivated successfully.
Jan 26 10:12:11 compute-0 sudo[268495]: pam_unix(sudo:session): session closed for user root
Jan 26 10:12:11 compute-0 sudo[268641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:12:11 compute-0 sudo[268641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:12:11 compute-0 sudo[268641]: pam_unix(sudo:session): session closed for user root
Jan 26 10:12:11 compute-0 sudo[268666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:12:11 compute-0 sudo[268666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
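The sudo lines show cephadm's pattern on each pass: locate python3, then run its copied binary to execute ceph-volume ("lvm list", "raw list") inside a short-lived podman container, which is why each query leaves a create/init/start/died/remove trail. The same inventory can be taken by hand with the installed cephadm; the command below mirrors the logged one minus the hash-suffixed copy under /var/lib/ceph:

    import subprocess

    cmd = ["cephadm", "ceph-volume",
           "--fsid", "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
           "--", "raw", "list", "--format", "json"]
    # check=True raises on a non-zero exit; stdout is the JSON device report.
    print(subprocess.run(cmd, check=True, capture_output=True, text=True).stdout)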
Jan 26 10:12:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v899: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 15 KiB/s wr, 1 op/s
Jan 26 10:12:11 compute-0 podman[268730]: 2026-01-26 10:12:11.98651673 +0000 UTC m=+0.037570612 container create 49c99d1b27c432c7ea10d6f4d2236e04e5cf16620639c0a34ef419e1cea1662e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:12:12 compute-0 nova_compute[254880]: 2026-01-26 10:12:12.017 254884 DEBUG nova.network.neutron [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Successfully updated port: 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 10:12:12 compute-0 systemd[1]: Started libpod-conmon-49c99d1b27c432c7ea10d6f4d2236e04e5cf16620639c0a34ef419e1cea1662e.scope.
Jan 26 10:12:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:12 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:12:12 compute-0 podman[268730]: 2026-01-26 10:12:12.06701896 +0000 UTC m=+0.118072862 container init 49c99d1b27c432c7ea10d6f4d2236e04e5cf16620639c0a34ef419e1cea1662e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cannon, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 10:12:12 compute-0 podman[268730]: 2026-01-26 10:12:11.970437011 +0000 UTC m=+0.021490913 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:12:12 compute-0 podman[268730]: 2026-01-26 10:12:12.074950155 +0000 UTC m=+0.126004037 container start 49c99d1b27c432c7ea10d6f4d2236e04e5cf16620639c0a34ef419e1cea1662e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 26 10:12:12 compute-0 podman[268730]: 2026-01-26 10:12:12.07840983 +0000 UTC m=+0.129463732 container attach 49c99d1b27c432c7ea10d6f4d2236e04e5cf16620639c0a34ef419e1cea1662e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 10:12:12 compute-0 objective_cannon[268747]: 167 167
Jan 26 10:12:12 compute-0 systemd[1]: libpod-49c99d1b27c432c7ea10d6f4d2236e04e5cf16620639c0a34ef419e1cea1662e.scope: Deactivated successfully.
Jan 26 10:12:12 compute-0 podman[268730]: 2026-01-26 10:12:12.079878136 +0000 UTC m=+0.130932018 container died 49c99d1b27c432c7ea10d6f4d2236e04e5cf16620639c0a34ef419e1cea1662e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 10:12:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9c267d0d0da8c37db2979f8d0ca4a049d9dabab596fb9a8418a34b58a98cb44-merged.mount: Deactivated successfully.
Jan 26 10:12:12 compute-0 nova_compute[254880]: 2026-01-26 10:12:12.115 254884 DEBUG oslo_concurrency.lockutils [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:12:12 compute-0 nova_compute[254880]: 2026-01-26 10:12:12.115 254884 DEBUG oslo_concurrency.lockutils [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquired lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:12:12 compute-0 nova_compute[254880]: 2026-01-26 10:12:12.115 254884 DEBUG nova.network.neutron [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 10:12:12 compute-0 podman[268730]: 2026-01-26 10:12:12.118506223 +0000 UTC m=+0.169560105 container remove 49c99d1b27c432c7ea10d6f4d2236e04e5cf16620639c0a34ef419e1cea1662e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:12:12 compute-0 nova_compute[254880]: 2026-01-26 10:12:12.119 254884 DEBUG nova.compute.manager [req-b00149cc-4ec4-4e60-a2f6-4fd6af13f4e5 req-e1bd9573-15d3-457c-a989-1a2283b6bb5a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-changed-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:12:12 compute-0 nova_compute[254880]: 2026-01-26 10:12:12.119 254884 DEBUG nova.compute.manager [req-b00149cc-4ec4-4e60-a2f6-4fd6af13f4e5 req-e1bd9573-15d3-457c-a989-1a2283b6bb5a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Refreshing instance network info cache due to event network-changed-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:12:12 compute-0 nova_compute[254880]: 2026-01-26 10:12:12.119 254884 DEBUG oslo_concurrency.lockutils [req-b00149cc-4ec4-4e60-a2f6-4fd6af13f4e5 req-e1bd9573-15d3-457c-a989-1a2283b6bb5a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:12:12 compute-0 systemd[1]: libpod-conmon-49c99d1b27c432c7ea10d6f4d2236e04e5cf16620639c0a34ef419e1cea1662e.scope: Deactivated successfully.
Jan 26 10:12:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000044s ======
Jan 26 10:12:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:12.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000044s
Jan 26 10:12:12 compute-0 podman[268770]: 2026-01-26 10:12:12.304498772 +0000 UTC m=+0.057035202 container create 26dfb6491c246a6bc9c0b9e9a369cf37f5b31fbf158a046d8fd1d9ec41b1e869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_montalcini, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Jan 26 10:12:12 compute-0 systemd[1]: Started libpod-conmon-26dfb6491c246a6bc9c0b9e9a369cf37f5b31fbf158a046d8fd1d9ec41b1e869.scope.
Jan 26 10:12:12 compute-0 podman[268770]: 2026-01-26 10:12:12.275921674 +0000 UTC m=+0.028458124 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:12:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2d361cf8fd01f9bf904181b07c078a7d90d467e4ad0c50e4dec188b23cf75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2d361cf8fd01f9bf904181b07c078a7d90d467e4ad0c50e4dec188b23cf75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2d361cf8fd01f9bf904181b07c078a7d90d467e4ad0c50e4dec188b23cf75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2d361cf8fd01f9bf904181b07c078a7d90d467e4ad0c50e4dec188b23cf75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:12 compute-0 podman[268770]: 2026-01-26 10:12:12.410957823 +0000 UTC m=+0.163494233 container init 26dfb6491c246a6bc9c0b9e9a369cf37f5b31fbf158a046d8fd1d9ec41b1e869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:12:12 compute-0 podman[268770]: 2026-01-26 10:12:12.423505284 +0000 UTC m=+0.176041674 container start 26dfb6491c246a6bc9c0b9e9a369cf37f5b31fbf158a046d8fd1d9ec41b1e869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:12:12 compute-0 podman[268770]: 2026-01-26 10:12:12.427292344 +0000 UTC m=+0.179828854 container attach 26dfb6491c246a6bc9c0b9e9a369cf37f5b31fbf158a046d8fd1d9ec41b1e869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_montalcini, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 10:12:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:12 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb10003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000044s ======
Jan 26 10:12:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:12.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000044s
Jan 26 10:12:12 compute-0 ceph-mon[74456]: pgmap v899: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 15 KiB/s wr, 1 op/s
Jan 26 10:12:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:13 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:13 compute-0 lvm[268865]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:12:13 compute-0 lvm[268865]: VG ceph_vg0 finished
Jan 26 10:12:13 compute-0 cranky_montalcini[268787]: {}
Jan 26 10:12:13 compute-0 systemd[1]: libpod-26dfb6491c246a6bc9c0b9e9a369cf37f5b31fbf158a046d8fd1d9ec41b1e869.scope: Deactivated successfully.
Jan 26 10:12:13 compute-0 systemd[1]: libpod-26dfb6491c246a6bc9c0b9e9a369cf37f5b31fbf158a046d8fd1d9ec41b1e869.scope: Consumed 1.237s CPU time.
Jan 26 10:12:13 compute-0 podman[268770]: 2026-01-26 10:12:13.200477885 +0000 UTC m=+0.953014275 container died 26dfb6491c246a6bc9c0b9e9a369cf37f5b31fbf158a046d8fd1d9ec41b1e869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_montalcini, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:12:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-83e2d361cf8fd01f9bf904181b07c078a7d90d467e4ad0c50e4dec188b23cf75-merged.mount: Deactivated successfully.
Jan 26 10:12:13 compute-0 podman[268770]: 2026-01-26 10:12:13.247797941 +0000 UTC m=+1.000334321 container remove 26dfb6491c246a6bc9c0b9e9a369cf37f5b31fbf158a046d8fd1d9ec41b1e869 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_montalcini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:12:13 compute-0 systemd[1]: libpod-conmon-26dfb6491c246a6bc9c0b9e9a369cf37f5b31fbf158a046d8fd1d9ec41b1e869.scope: Deactivated successfully.
Jan 26 10:12:13 compute-0 nova_compute[254880]: 2026-01-26 10:12:13.277 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:13 compute-0 sudo[268666]: pam_unix(sudo:session): session closed for user root
Jan 26 10:12:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:12:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:12:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:12:13 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:12:13 compute-0 sudo[268880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:12:13 compute-0 sudo[268880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:12:13 compute-0 sudo[268880]: pam_unix(sudo:session): session closed for user root
Jan 26 10:12:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:12:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v900: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 26 10:12:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:14 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:14.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:14 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:12:14 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:12:14 compute-0 ceph-mon[74456]: pgmap v900: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.407 254884 DEBUG nova.network.neutron [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updating instance_info_cache with network_info: [{"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.430 254884 DEBUG oslo_concurrency.lockutils [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Releasing lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.431 254884 DEBUG oslo_concurrency.lockutils [req-b00149cc-4ec4-4e60-a2f6-4fd6af13f4e5 req-e1bd9573-15d3-457c-a989-1a2283b6bb5a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.431 254884 DEBUG nova.network.neutron [req-b00149cc-4ec4-4e60-a2f6-4fd6af13f4e5 req-e1bd9573-15d3-457c-a989-1a2283b6bb5a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Refreshing network info cache for port 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.436 254884 DEBUG nova.virt.libvirt.vif [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T10:11:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-955673138',display_name='tempest-TestNetworkBasicOps-server-955673138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-955673138',id=6,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCEIavFfmzh5bpA5QZf3zq5Gb6QqYI3VELaJd/a0a5TYtMMLwGqLcOYuI5vMKbR7fL+izNWg9808jvE9yRGaxYOyB4XbsZVXNV2ntaIKcWPfcrVa/D66+pB1i/BBWQEzIQ==',key_name='tempest-TestNetworkBasicOps-822391309',keypairs=<?>,launch_index=0,launched_at=2026-01-26T10:11:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-wm8zw3uy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T10:11:35Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=26741812-4ddf-457d-b571-7e2005b5133d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.436 254884 DEBUG nova.network.os_vif_util [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.437 254884 DEBUG nova.network.os_vif_util [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:da:8f,bridge_name='br-int',has_traffic_filtering=True,id=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8,network=Network(ae1cb66c-0987-4156-9bdb-cb2a08957306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2a6f2c-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.437 254884 DEBUG os_vif [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:da:8f,bridge_name='br-int',has_traffic_filtering=True,id=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8,network=Network(ae1cb66c-0987-4156-9bdb-cb2a08957306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2a6f2c-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.438 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.438 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.438 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.442 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.442 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a2a6f2c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.442 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a2a6f2c-40, col_values=(('external_ids', {'iface-id': '5a2a6f2c-40e2-42ce-9d76-e334db61eeb8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:da:8f', 'vm-uuid': '26741812-4ddf-457d-b571-7e2005b5133d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.443 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:14 compute-0 NetworkManager[48970]: <info>  [1769422334.4449] manager: (tap5a2a6f2c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.447 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.450 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.451 254884 INFO os_vif [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:da:8f,bridge_name='br-int',has_traffic_filtering=True,id=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8,network=Network(ae1cb66c-0987-4156-9bdb-cb2a08957306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2a6f2c-40')
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.451 254884 DEBUG nova.virt.libvirt.vif [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T10:11:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-955673138',display_name='tempest-TestNetworkBasicOps-server-955673138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-955673138',id=6,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCEIavFfmzh5bpA5QZf3zq5Gb6QqYI3VELaJd/a0a5TYtMMLwGqLcOYuI5vMKbR7fL+izNWg9808jvE9yRGaxYOyB4XbsZVXNV2ntaIKcWPfcrVa/D66+pB1i/BBWQEzIQ==',key_name='tempest-TestNetworkBasicOps-822391309',keypairs=<?>,launch_index=0,launched_at=2026-01-26T10:11:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-wm8zw3uy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T10:11:35Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=26741812-4ddf-457d-b571-7e2005b5133d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.452 254884 DEBUG nova.network.os_vif_util [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.452 254884 DEBUG nova.network.os_vif_util [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:da:8f,bridge_name='br-int',has_traffic_filtering=True,id=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8,network=Network(ae1cb66c-0987-4156-9bdb-cb2a08957306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2a6f2c-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.455 254884 DEBUG nova.virt.libvirt.guest [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] attach device xml: <interface type="ethernet">
Jan 26 10:12:14 compute-0 nova_compute[254880]:   <mac address="fa:16:3e:37:da:8f"/>
Jan 26 10:12:14 compute-0 nova_compute[254880]:   <model type="virtio"/>
Jan 26 10:12:14 compute-0 nova_compute[254880]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 10:12:14 compute-0 nova_compute[254880]:   <mtu size="1442"/>
Jan 26 10:12:14 compute-0 nova_compute[254880]:   <target dev="tap5a2a6f2c-40"/>
Jan 26 10:12:14 compute-0 nova_compute[254880]: </interface>
Jan 26 10:12:14 compute-0 nova_compute[254880]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 26 10:12:14 compute-0 kernel: tap5a2a6f2c-40: entered promiscuous mode
Jan 26 10:12:14 compute-0 NetworkManager[48970]: <info>  [1769422334.4679] manager: (tap5a2a6f2c-40): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Jan 26 10:12:14 compute-0 ovn_controller[155832]: 2026-01-26T10:12:14Z|00044|binding|INFO|Claiming lport 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 for this chassis.
Jan 26 10:12:14 compute-0 ovn_controller[155832]: 2026-01-26T10:12:14Z|00045|binding|INFO|5a2a6f2c-40e2-42ce-9d76-e334db61eeb8: Claiming fa:16:3e:37:da:8f 10.100.0.26
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.469 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:14 compute-0 systemd-udevd[268868]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.479 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:da:8f 10.100.0.26'], port_security=['fa:16:3e:37:da:8f 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '26741812-4ddf-457d-b571-7e2005b5133d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ae1cb66c-0987-4156-9bdb-cb2a08957306', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75a6a4cb-bd58-457c-b449-9db5f70f3f78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73bcc0f9-41ce-47a1-86a1-53fe1b73bb31, chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], logical_port=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.481 166625 INFO neutron.agent.ovn.metadata.agent [-] Port 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 in datapath ae1cb66c-0987-4156-9bdb-cb2a08957306 bound to our chassis
Jan 26 10:12:14 compute-0 NetworkManager[48970]: <info>  [1769422334.4837] device (tap5a2a6f2c-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 10:12:14 compute-0 NetworkManager[48970]: <info>  [1769422334.4845] device (tap5a2a6f2c-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.484 166625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ae1cb66c-0987-4156-9bdb-cb2a08957306
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.497 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[a2338634-2413-4cce-aa97-e7d16bae83a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.498 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapae1cb66c-01 in ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.499 261020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapae1cb66c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.499 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[7faffa82-7b5c-4851-9d7e-72289f221c01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.500 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[aa11bb50-1e40-44c4-9e47-03ad5be8d1cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.511 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[f5da5dcc-7766-46b4-9a95-efc5661197b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.526 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:14 compute-0 ovn_controller[155832]: 2026-01-26T10:12:14Z|00046|binding|INFO|Setting lport 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 ovn-installed in OVS
Jan 26 10:12:14 compute-0 ovn_controller[155832]: 2026-01-26T10:12:14Z|00047|binding|INFO|Setting lport 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 up in Southbound
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.530 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.536 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1dc594-780b-40e6-bdd0-c6a68ea2ac40]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.565 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[204e382c-4de6-43f2-8ba9-377efdc942e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.572 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbd04ae-8883-4256-be6f-31cdbef1249e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 NetworkManager[48970]: <info>  [1769422334.5731] manager: (tapae1cb66c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 26 10:12:14 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:14 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.601 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[220a7a32-2d0a-4940-93c7-28b4aa1b274f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.604 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[04784974-e999-4f54-a216-dfb9a7c6f6ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 NetworkManager[48970]: <info>  [1769422334.6258] device (tapae1cb66c-00): carrier: link connected
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.629 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[4982595d-c61a-4639-94c9-1f7c742da179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.646 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7e3e41-6fcd-4de5-b8e4-26009385ff0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapae1cb66c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:97:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428828, 'reachable_time': 25805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268942, 'error': None, 'target': 'ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.660 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[d7cad6bd-036d-4f57-a4fd-6edd6a71b56a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:9770'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428828, 'tstamp': 428828}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268943, 'error': None, 'target': 'ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.667 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.677 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[3cdabd8a-a7d8-45ba-9149-617a121abdf2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapae1cb66c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:97:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428828, 'reachable_time': 25805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268944, 'error': None, 'target': 'ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.706 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[b25e0bea-3651-415c-8e1a-b52fa58b80cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.757 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[edcceec1-2ff0-4d3c-94c0-f3ef1b82f458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.758 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae1cb66c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.759 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.759 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae1cb66c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.760 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:14 compute-0 NetworkManager[48970]: <info>  [1769422334.7616] manager: (tapae1cb66c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 26 10:12:14 compute-0 kernel: tapae1cb66c-00: entered promiscuous mode
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.763 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.764 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapae1cb66c-00, col_values=(('external_ids', {'iface-id': 'eff5217a-1c96-40b0-bc7b-e1d3937349a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.765 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:14 compute-0 ovn_controller[155832]: 2026-01-26T10:12:14Z|00048|binding|INFO|Releasing lport eff5217a-1c96-40b0-bc7b-e1d3937349a6 from this chassis (sb_readonly=0)
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.779 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.780 166625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ae1cb66c-0987-4156-9bdb-cb2a08957306.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ae1cb66c-0987-4156-9bdb-cb2a08957306.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.781 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[9256ed23-4c02-4c67-8422-06165a0cdfa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.781 166625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: global
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     log         /dev/log local0 debug
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     log-tag     haproxy-metadata-proxy-ae1cb66c-0987-4156-9bdb-cb2a08957306
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     user        root
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     group       root
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     maxconn     1024
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     pidfile     /var/lib/neutron/external/pids/ae1cb66c-0987-4156-9bdb-cb2a08957306.pid.haproxy
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     daemon
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: defaults
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     log global
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     mode http
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     option httplog
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     option dontlognull
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     option http-server-close
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     option forwardfor
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     retries                 3
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     timeout http-request    30s
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     timeout connect         30s
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     timeout client          32s
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     timeout server          32s
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     timeout http-keep-alive 30s
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: listen listener
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     bind 169.254.169.254:80
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:     http-request add-header X-OVN-Network-ID ae1cb66c-0987-4156-9bdb-cb2a08957306
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 10:12:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:14.782 166625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306', 'env', 'PROCESS_TAG=haproxy-ae1cb66c-0987-4156-9bdb-cb2a08957306', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ae1cb66c-0987-4156-9bdb-cb2a08957306.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
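The rootwrap invocation above is the whole launch: haproxy reads the rendered config from inside the ovnmeta- namespace. A sketch reproducing the same command directly with subprocess (assumes root and that the namespace and config file from the log exist; rootwrap and the PROCESS_TAG environment are left out):

    # Sketch: re-run the logged command without rootwrap. Root required.
    import subprocess

    netns = "ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306"
    cfg = ("/var/lib/neutron/ovn-metadata-proxy/"
           "ae1cb66c-0987-4156-9bdb-cb2a08957306.conf")
    subprocess.run(["ip", "netns", "exec", netns, "haproxy", "-f", cfg],
                   check=True)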
Jan 26 10:12:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:14.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
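These beast access lines recur roughly every two seconds from 192.168.122.100 and .102: anonymous "HEAD / HTTP/1.0" health probes against radosgw. A sketch reproducing one probe with the standard library (host from the log; the port is an assumption, since the access line does not record where radosgw listens):

    # Reproduce the anonymous "HEAD /" probe. Port 8080 is a guess; the
    # beast access line above does not include the listening port.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the log shows these returning 200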
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.868 254884 DEBUG nova.virt.libvirt.driver [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.868 254884 DEBUG nova.virt.libvirt.driver [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.868 254884 DEBUG nova.virt.libvirt.driver [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No VIF found with MAC fa:16:3e:1b:a5:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.869 254884 DEBUG nova.virt.libvirt.driver [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No VIF found with MAC fa:16:3e:37:da:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.889 254884 DEBUG nova.virt.libvirt.guest [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 10:12:14 compute-0 nova_compute[254880]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 10:12:14 compute-0 nova_compute[254880]:   <nova:name>tempest-TestNetworkBasicOps-server-955673138</nova:name>
Jan 26 10:12:14 compute-0 nova_compute[254880]:   <nova:creationTime>2026-01-26 10:12:14</nova:creationTime>
Jan 26 10:12:14 compute-0 nova_compute[254880]:   <nova:flavor name="m1.nano">
Jan 26 10:12:14 compute-0 nova_compute[254880]:     <nova:memory>128</nova:memory>
Jan 26 10:12:14 compute-0 nova_compute[254880]:     <nova:disk>1</nova:disk>
Jan 26 10:12:14 compute-0 nova_compute[254880]:     <nova:swap>0</nova:swap>
Jan 26 10:12:14 compute-0 nova_compute[254880]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 10:12:14 compute-0 nova_compute[254880]:     <nova:vcpus>1</nova:vcpus>
Jan 26 10:12:14 compute-0 nova_compute[254880]:   </nova:flavor>
Jan 26 10:12:14 compute-0 nova_compute[254880]:   <nova:owner>
Jan 26 10:12:14 compute-0 nova_compute[254880]:     <nova:user uuid="c1208d3e25b940ea93fe76884c7a53db">tempest-TestNetworkBasicOps-966559857-project-member</nova:user>
Jan 26 10:12:14 compute-0 nova_compute[254880]:     <nova:project uuid="6ed221b375a44fc2bb2a8f232c5446e7">tempest-TestNetworkBasicOps-966559857</nova:project>
Jan 26 10:12:14 compute-0 nova_compute[254880]:   </nova:owner>
Jan 26 10:12:14 compute-0 nova_compute[254880]:   <nova:root type="image" uuid="6789692f-fc1f-4efa-ae75-dcc13be695ef"/>
Jan 26 10:12:14 compute-0 nova_compute[254880]:   <nova:ports>
Jan 26 10:12:14 compute-0 nova_compute[254880]:     <nova:port uuid="92a5f80f-60e2-449d-9da8-ebaa31f1476c">
Jan 26 10:12:14 compute-0 nova_compute[254880]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 10:12:14 compute-0 nova_compute[254880]:     </nova:port>
Jan 26 10:12:14 compute-0 nova_compute[254880]:     <nova:port uuid="5a2a6f2c-40e2-42ce-9d76-e334db61eeb8">
Jan 26 10:12:14 compute-0 nova_compute[254880]:       <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Jan 26 10:12:14 compute-0 nova_compute[254880]:     </nova:port>
Jan 26 10:12:14 compute-0 nova_compute[254880]:   </nova:ports>
Jan 26 10:12:14 compute-0 nova_compute[254880]: </nova:instance>
Jan 26 10:12:14 compute-0 nova_compute[254880]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
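The domain metadata nova set is printed in full above; pulling the ports and their fixed IPs back out is a short ElementTree query. A sketch against a trimmed copy of the logged XML (namespace URI verbatim from the xmlns declaration):

    # Parse the nova:instance metadata shown above and list port UUIDs and
    # fixed IPs. XML trimmed to the <nova:ports> subtree the sketch reads.
    import xml.etree.ElementTree as ET

    NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}
    xml = """<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
      <nova:ports>
        <nova:port uuid="92a5f80f-60e2-449d-9da8-ebaa31f1476c">
          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
        </nova:port>
        <nova:port uuid="5a2a6f2c-40e2-42ce-9d76-e334db61eeb8">
          <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
        </nova:port>
      </nova:ports>
    </nova:instance>"""

    root = ET.fromstring(xml)
    for port in root.findall(".//nova:port", NS):
        ips = [ip.get("address") for ip in port.findall("nova:ip", NS)]
        print(port.get("uuid"), ips)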
Jan 26 10:12:14 compute-0 nova_compute[254880]: 2026-01-26 10:12:14.910 254884 DEBUG oslo_concurrency.lockutils [None req-3332a5ad-f593-42f9-8e82-93d64f458f8c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "interface-26741812-4ddf-457d-b571-7e2005b5133d-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:12:15 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:15 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:15 compute-0 podman[268976]: 2026-01-26 10:12:15.135086511 +0000 UTC m=+0.049190141 container create 18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 10:12:15 compute-0 nova_compute[254880]: 2026-01-26 10:12:15.146 254884 DEBUG nova.compute.manager [req-049c2527-aa79-4395-a7af-459f2fbfb114 req-4b274c32-ae3b-4eb5-8950-cee53393723d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-vif-plugged-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:12:15 compute-0 nova_compute[254880]: 2026-01-26 10:12:15.147 254884 DEBUG oslo_concurrency.lockutils [req-049c2527-aa79-4395-a7af-459f2fbfb114 req-4b274c32-ae3b-4eb5-8950-cee53393723d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "26741812-4ddf-457d-b571-7e2005b5133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:12:15 compute-0 nova_compute[254880]: 2026-01-26 10:12:15.147 254884 DEBUG oslo_concurrency.lockutils [req-049c2527-aa79-4395-a7af-459f2fbfb114 req-4b274c32-ae3b-4eb5-8950-cee53393723d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:12:15 compute-0 nova_compute[254880]: 2026-01-26 10:12:15.148 254884 DEBUG oslo_concurrency.lockutils [req-049c2527-aa79-4395-a7af-459f2fbfb114 req-4b274c32-ae3b-4eb5-8950-cee53393723d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:12:15 compute-0 nova_compute[254880]: 2026-01-26 10:12:15.148 254884 DEBUG nova.compute.manager [req-049c2527-aa79-4395-a7af-459f2fbfb114 req-4b274c32-ae3b-4eb5-8950-cee53393723d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] No waiting events found dispatching network-vif-plugged-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:12:15 compute-0 nova_compute[254880]: 2026-01-26 10:12:15.148 254884 WARNING nova.compute.manager [req-049c2527-aa79-4395-a7af-459f2fbfb114 req-4b274c32-ae3b-4eb5-8950-cee53393723d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received unexpected event network-vif-plugged-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 for instance with vm_state active and task_state None.
Jan 26 10:12:15 compute-0 systemd[1]: Started libpod-conmon-18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86.scope.
Jan 26 10:12:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:12:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8737b16b8e1692aac64eef54da4f3386039bd5e712accc6e0a5d3d90456f9e0b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:15 compute-0 podman[268976]: 2026-01-26 10:12:15.109265056 +0000 UTC m=+0.023368706 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 26 10:12:15 compute-0 podman[268976]: 2026-01-26 10:12:15.20573922 +0000 UTC m=+0.119842850 container init 18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 10:12:15 compute-0 podman[268976]: 2026-01-26 10:12:15.211158683 +0000 UTC m=+0.125262313 container start 18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 10:12:15 compute-0 neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306[268992]: [NOTICE]   (269007) : New worker (269013) forked
Jan 26 10:12:15 compute-0 neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306[268992]: [NOTICE]   (269007) : Loading success.
Jan 26 10:12:15 compute-0 podman[268994]: 2026-01-26 10:12:15.247781301 +0000 UTC m=+0.059033211 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 10:12:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v901: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Jan 26 10:12:15 compute-0 ovn_controller[155832]: 2026-01-26T10:12:15Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:da:8f 10.100.0.26
Jan 26 10:12:15 compute-0 ovn_controller[155832]: 2026-01-26T10:12:15Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:da:8f 10.100.0.26
Jan 26 10:12:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:16 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:16 compute-0 nova_compute[254880]: 2026-01-26 10:12:16.093 254884 DEBUG nova.network.neutron [req-b00149cc-4ec4-4e60-a2f6-4fd6af13f4e5 req-e1bd9573-15d3-457c-a989-1a2283b6bb5a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updated VIF entry in instance network info cache for port 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:12:16 compute-0 nova_compute[254880]: 2026-01-26 10:12:16.093 254884 DEBUG nova.network.neutron [req-b00149cc-4ec4-4e60-a2f6-4fd6af13f4e5 req-e1bd9573-15d3-457c-a989-1a2283b6bb5a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updating instance_info_cache with network_info: [{"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
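The cache update above carries the entire network_info blob as JSON. A sketch that summarizes such a payload, one line per VIF with its fixed and floating addresses (payload trimmed to the fields the sketch reads; values copied from the log):

    # Summarize a network_info entry like the one logged above.
    import json

    network_info = json.loads("""[
      {"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7",
       "network": {"subnets": [{"ips": [{"address": "10.100.0.11",
         "floating_ips": [{"address": "192.168.122.187"}]}]}]}, "active": true},
      {"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f",
       "network": {"subnets": [{"ips": [{"address": "10.100.0.26",
         "floating_ips": []}]}]}, "active": false}
    ]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip["floating_ips"]]
                print(vif["id"], vif["address"], ip["address"], floats,
                      "active" if vif["active"] else "inactive")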
Jan 26 10:12:16 compute-0 nova_compute[254880]: 2026-01-26 10:12:16.112 254884 DEBUG oslo_concurrency.lockutils [req-b00149cc-4ec4-4e60-a2f6-4fd6af13f4e5 req-e1bd9573-15d3-457c-a989-1a2283b6bb5a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:12:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:16.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:16 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:12:16] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 26 10:12:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:12:16] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 26 10:12:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:16.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:17 compute-0 ceph-mon[74456]: pgmap v901: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Jan 26 10:12:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:17 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:12:17.148Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:12:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:12:17.148Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:12:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:12:17.149Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
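Both dashboard webhooks time out against compute-1 and compute-2 on port 8443, so alertmanager gives up after its retries. A throwaway stdlib receiver for checking reachability of that path and port from the alertmanager host (plain HTTP only; the real ceph-dashboard receiver may differ, e.g. expect TLS):

    # Throwaway debugging receiver: accept the POSTs alertmanager is failing
    # to deliver (path and port taken from the log) and dump them to stdout.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            print(self.path, body.decode("utf-8", "replace"))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()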
Jan 26 10:12:17 compute-0 nova_compute[254880]: 2026-01-26 10:12:17.264 254884 DEBUG nova.compute.manager [req-78815b18-fbbb-4a6d-9b2f-d95a260c581d req-99e7ad44-e921-4941-b098-cd055f2eab2f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-vif-plugged-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:12:17 compute-0 nova_compute[254880]: 2026-01-26 10:12:17.265 254884 DEBUG oslo_concurrency.lockutils [req-78815b18-fbbb-4a6d-9b2f-d95a260c581d req-99e7ad44-e921-4941-b098-cd055f2eab2f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "26741812-4ddf-457d-b571-7e2005b5133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:12:17 compute-0 nova_compute[254880]: 2026-01-26 10:12:17.265 254884 DEBUG oslo_concurrency.lockutils [req-78815b18-fbbb-4a6d-9b2f-d95a260c581d req-99e7ad44-e921-4941-b098-cd055f2eab2f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:12:17 compute-0 nova_compute[254880]: 2026-01-26 10:12:17.265 254884 DEBUG oslo_concurrency.lockutils [req-78815b18-fbbb-4a6d-9b2f-d95a260c581d req-99e7ad44-e921-4941-b098-cd055f2eab2f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:12:17 compute-0 nova_compute[254880]: 2026-01-26 10:12:17.265 254884 DEBUG nova.compute.manager [req-78815b18-fbbb-4a6d-9b2f-d95a260c581d req-99e7ad44-e921-4941-b098-cd055f2eab2f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] No waiting events found dispatching network-vif-plugged-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:12:17 compute-0 nova_compute[254880]: 2026-01-26 10:12:17.266 254884 WARNING nova.compute.manager [req-78815b18-fbbb-4a6d-9b2f-d95a260c581d req-99e7ad44-e921-4941-b098-cd055f2eab2f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received unexpected event network-vif-plugged-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 for instance with vm_state active and task_state None.
Jan 26 10:12:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v902: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Jan 26 10:12:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:18 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:18 compute-0 ceph-mon[74456]: pgmap v902: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Jan 26 10:12:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:18.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:12:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:18 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24002ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:12:18
Jan 26 10:12:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:12:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:12:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.control', '.mgr', 'backups', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.nfs']
Jan 26 10:12:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:12:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:12:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:12:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000044s ======
Jan 26 10:12:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:18.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000044s
Jan 26 10:12:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:12:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:12:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:12:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:12:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:12:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:12:19 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:19 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007595910049163248 of space, bias 1.0, pg target 0.22787730147489746 quantized to 32 (current 32)
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
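The pg_autoscaler lines above expose their own arithmetic: every logged "pg target" equals the space ratio times the bias times 300, a factor consistent with the default mon_target_pg_per_osd of 100 across the 3 OSDs behind this 60 GiB cluster (an inference from the numbers, not something the log states). A worked check against three of the logged rows:

    # Reproduce the pg_autoscaler arithmetic visible above. TARGET_PGS = 300
    # is inferred: ratio * bias * 300 matches every logged "pg target".
    TARGET_PGS = 300

    for pool, ratio, bias, logged in [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("vms",                0.0007595910049163248, 1.0, 0.22787730147489746),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]:
        target = ratio * bias * TARGET_PGS
        assert abs(target - logged) < 1e-9, pool
        print(f"{pool}: pg target {target:.6g} (log: {logged:.6g})")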
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:12:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:12:19 compute-0 nova_compute[254880]: 2026-01-26 10:12:19.465 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v903: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Jan 26 10:12:19 compute-0 nova_compute[254880]: 2026-01-26 10:12:19.670 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:20 compute-0 nova_compute[254880]: 2026-01-26 10:12:20.083 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:12:20 compute-0 nova_compute[254880]: 2026-01-26 10:12:20.123 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Triggering sync for uuid 26741812-4ddf-457d-b571-7e2005b5133d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 10:12:20 compute-0 nova_compute[254880]: 2026-01-26 10:12:20.124 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "26741812-4ddf-457d-b571-7e2005b5133d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:12:20 compute-0 nova_compute[254880]: 2026-01-26 10:12:20.124 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "26741812-4ddf-457d-b571-7e2005b5133d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:12:20 compute-0 nova_compute[254880]: 2026-01-26 10:12:20.167 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "26741812-4ddf-457d-b571-7e2005b5133d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:12:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:20.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:20 compute-0 ceph-mon[74456]: pgmap v903: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Jan 26 10:12:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:12:20 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:20 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:20.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:21 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:21 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24003010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:21 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:21.617 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:12:21 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:21.618 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 10:12:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v904: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 7.0 KiB/s wr, 46 op/s
Jan 26 10:12:21 compute-0 nova_compute[254880]: 2026-01-26 10:12:21.655 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:22 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000044s ======
Jan 26 10:12:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:22.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000044s
Jan 26 10:12:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:22 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:22 compute-0 ceph-mon[74456]: pgmap v904: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 7.0 KiB/s wr, 46 op/s
Jan 26 10:12:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:22.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:23 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24003030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:12:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v905: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 6.0 KiB/s wr, 46 op/s
Jan 26 10:12:23 compute-0 sudo[269034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:12:23 compute-0 sudo[269034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:12:23 compute-0 sudo[269034]: pam_unix(sudo:session): session closed for user root
Jan 26 10:12:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:24 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:24.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:24 compute-0 nova_compute[254880]: 2026-01-26 10:12:24.467 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:24 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:24 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb34003d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:24 compute-0 nova_compute[254880]: 2026-01-26 10:12:24.672 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:24.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:24 compute-0 ceph-mon[74456]: pgmap v905: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 6.0 KiB/s wr, 46 op/s
Jan 26 10:12:25 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:25 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v906: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 6.0 KiB/s wr, 99 op/s
Jan 26 10:12:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:26 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24003050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:26.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:26 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:26 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:26.621 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
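This DbSetCommand is the second half of the handshake that started at 10:12:21: SB_Global.nb_cfg moved to 9, and after the 5-second delay the agent acknowledges it by writing neutron:ovn-metadata-sb-cfg=9 into its Chassis_Private row, which neutron reads to gauge agent liveness. A sketch checking both sides from the CLI (record UUID taken from the log; assumes ovn-sbctl's defaults can reach the southbound DB):

    # Compare SB_Global.nb_cfg with the ack the agent stored in its
    # Chassis_Private row (external_ids is read whole, since its key
    # contains a colon).
    import subprocess

    def sbctl(*args: str) -> str:
        out = subprocess.run(("ovn-sbctl",) + args, check=True,
                             capture_output=True, text=True)
        return out.stdout.strip()

    nb_cfg = sbctl("get", "SB_Global", ".", "nb_cfg")
    ext_ids = sbctl("get", "Chassis_Private",
                    "f90cdfa2-81a1-408b-861e-9121944637ea", "external_ids")
    print("nb_cfg =", nb_cfg)
    print("chassis external_ids =", ext_ids)  # expect ovn-metadata-sb-cfg="9"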
Jan 26 10:12:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:12:26] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 26 10:12:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:12:26] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Jan 26 10:12:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000044s ======
Jan 26 10:12:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:26.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000044s
Jan 26 10:12:26 compute-0 ceph-mon[74456]: pgmap v906: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 6.0 KiB/s wr, 99 op/s
Jan 26 10:12:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:27 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18001840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:12:27.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:12:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v907: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 6.0 KiB/s wr, 99 op/s
Jan 26 10:12:27 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/981306923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:12:27 compute-0 sshd-session[269064]: Invalid user postgres from 157.245.76.178 port 41458
Jan 26 10:12:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:28 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:28 compute-0 sshd-session[269064]: Connection closed by invalid user postgres 157.245.76.178 port 41458 [preauth]
Jan 26 10:12:28 compute-0 podman[269066]: 2026-01-26 10:12:28.095941971 +0000 UTC m=+0.094861403 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 26 10:12:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:28.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:12:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:28 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24003050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:28.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:28 compute-0 ceph-mon[74456]: pgmap v907: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 6.0 KiB/s wr, 99 op/s
Jan 26 10:12:29 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:29 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:29 compute-0 nova_compute[254880]: 2026-01-26 10:12:29.469 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v908: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 6.0 KiB/s wr, 99 op/s
Jan 26 10:12:29 compute-0 nova_compute[254880]: 2026-01-26 10:12:29.674 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18001840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:30 compute-0 ceph-mon[74456]: pgmap v908: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 6.0 KiB/s wr, 99 op/s
Jan 26 10:12:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:30.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:12:30 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:30 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:30.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:31 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:31 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24003070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v909: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 26 10:12:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:32 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:32.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:32 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18001840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:32 compute-0 ceph-mon[74456]: pgmap v909: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 26 10:12:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:32.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:32 compute-0 nova_compute[254880]: 2026-01-26 10:12:32.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:12:32 compute-0 nova_compute[254880]: 2026-01-26 10:12:32.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:12:32 compute-0 nova_compute[254880]: 2026-01-26 10:12:32.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:12:32 compute-0 nova_compute[254880]: 2026-01-26 10:12:32.997 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:12:32 compute-0 nova_compute[254880]: 2026-01-26 10:12:32.997 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:12:32 compute-0 nova_compute[254880]: 2026-01-26 10:12:32.997 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:12:32 compute-0 nova_compute[254880]: 2026-01-26 10:12:32.997 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:12:32 compute-0 nova_compute[254880]: 2026-01-26 10:12:32.998 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:12:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:33 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:12:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1103791736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:12:33 compute-0 nova_compute[254880]: 2026-01-26 10:12:33.426 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:12:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:12:33 compute-0 nova_compute[254880]: 2026-01-26 10:12:33.536 254884 DEBUG nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 10:12:33 compute-0 nova_compute[254880]: 2026-01-26 10:12:33.537 254884 DEBUG nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 10:12:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v910: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Jan 26 10:12:33 compute-0 nova_compute[254880]: 2026-01-26 10:12:33.699 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:12:33 compute-0 nova_compute[254880]: 2026-01-26 10:12:33.701 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4361MB free_disk=59.921878814697266GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:12:33 compute-0 nova_compute[254880]: 2026-01-26 10:12:33.701 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:12:33 compute-0 nova_compute[254880]: 2026-01-26 10:12:33.701 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:12:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:12:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:12:33 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1103791736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:12:33 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2082041347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.018 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Instance 26741812-4ddf-457d-b571-7e2005b5133d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.019 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.019 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.038 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing inventories for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 10:12:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:34 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24004bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.059 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating ProviderTree inventory for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.060 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating inventory in ProviderTree for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.079 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing aggregate associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.099 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing trait associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.145 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:12:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:34.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.481 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:12:34 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/435654853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.583 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.593 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:12:34 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:34 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.631 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.676 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.725 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:12:34 compute-0 nova_compute[254880]: 2026-01-26 10:12:34.726 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:12:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000044s ======
Jan 26 10:12:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:34.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000044s
Jan 26 10:12:35 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:35 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18001840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v911: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 26 10:12:35 compute-0 ceph-mon[74456]: pgmap v910: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Jan 26 10:12:35 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:12:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2307260624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:12:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2126813450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:12:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/435654853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:12:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3132034729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:12:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:36 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:36.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:36 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24004bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:12:36] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 26 10:12:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:12:36] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 26 10:12:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000044s ======
Jan 26 10:12:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:36.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000044s
Jan 26 10:12:36 compute-0 ceph-mon[74456]: pgmap v911: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 26 10:12:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3576426832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:12:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:37 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:12:37.151Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:12:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v912: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 26 10:12:37 compute-0 nova_compute[254880]: 2026-01-26 10:12:37.726 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:12:37 compute-0 nova_compute[254880]: 2026-01-26 10:12:37.726 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:12:37 compute-0 nova_compute[254880]: 2026-01-26 10:12:37.727 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:12:37 compute-0 nova_compute[254880]: 2026-01-26 10:12:37.727 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:12:37 compute-0 nova_compute[254880]: 2026-01-26 10:12:37.989 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:12:37 compute-0 nova_compute[254880]: 2026-01-26 10:12:37.989 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquired lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:12:37 compute-0 nova_compute[254880]: 2026-01-26 10:12:37.989 254884 DEBUG nova.network.neutron [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 10:12:37 compute-0 nova_compute[254880]: 2026-01-26 10:12:37.989 254884 DEBUG nova.objects.instance [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 26741812-4ddf-457d-b571-7e2005b5133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:12:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:38 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18002940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:38.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1035487032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:12:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:38 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:12:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000044s ======
Jan 26 10:12:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:38.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000044s
Jan 26 10:12:39 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:39 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24004bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:39 compute-0 nova_compute[254880]: 2026-01-26 10:12:39.484 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v913: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 26 10:12:39 compute-0 nova_compute[254880]: 2026-01-26 10:12:39.678 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:39 compute-0 ceph-mon[74456]: pgmap v912: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 26 10:12:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:40.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:12:40 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:40 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18002940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:40.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:41 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb14004ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v914: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 26 10:12:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:42 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb24004bd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:42.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:42 compute-0 ceph-mon[74456]: pgmap v913: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 26 10:12:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:42 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb380094f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 26 10:12:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:42.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[259415]: 26/01/2026 10:12:43 : epoch 69773c71 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ffb18002940 fd 48 proxy ignored for local
Jan 26 10:12:43 compute-0 kernel: ganesha.nfsd[269061]: segfault at 50 ip 00007ffbc433532e sp 00007ffb2cff8210 error 4 in libntirpc.so.5.8[7ffbc431a000+2c000] likely on CPU 1 (core 0, socket 1)
Jan 26 10:12:43 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 26 10:12:43 compute-0 systemd[1]: Started Process Core Dump (PID 269156/UID 0).
Jan 26 10:12:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:12:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v915: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 23 KiB/s wr, 12 op/s
Jan 26 10:12:44 compute-0 sudo[269158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:12:44 compute-0 sudo[269158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:12:44 compute-0 sudo[269158]: pam_unix(sudo:session): session closed for user root
Jan 26 10:12:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:44.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:44 compute-0 nova_compute[254880]: 2026-01-26 10:12:44.487 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:44 compute-0 nova_compute[254880]: 2026-01-26 10:12:44.679 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:44 compute-0 ceph-mon[74456]: pgmap v914: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 26 10:12:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:44.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v916: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 23 KiB/s wr, 66 op/s
Jan 26 10:12:45 compute-0 systemd-coredump[269157]: Process 259419 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 80:
                                                    #0  0x00007ffbc433532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    #1  0x0000000000000000 n/a (n/a + 0x0)
                                                    #2  0x00007ffbc433f900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)
                                                    ELF object binary architecture: AMD x86-64
Jan 26 10:12:45 compute-0 ceph-mon[74456]: pgmap v915: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 23 KiB/s wr, 12 op/s
Jan 26 10:12:45 compute-0 systemd[1]: systemd-coredump@14-269156-0.service: Deactivated successfully.
Jan 26 10:12:45 compute-0 systemd[1]: systemd-coredump@14-269156-0.service: Consumed 1.131s CPU time.
Jan 26 10:12:45 compute-0 podman[269191]: 2026-01-26 10:12:45.891531568 +0000 UTC m=+0.023967115 container died a0a85c01ab015d054cdde2983b0776ad331e5ff996efcf13e612a1a97d7b7fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 10:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b91c4f93b2615a94999620bcba1571d10e7bca37e9b3445451c64042770ccc8-merged.mount: Deactivated successfully.
Jan 26 10:12:45 compute-0 podman[269190]: 2026-01-26 10:12:45.939860999 +0000 UTC m=+0.067131485 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 10:12:45 compute-0 podman[269191]: 2026-01-26 10:12:45.945805102 +0000 UTC m=+0.078240649 container remove a0a85c01ab015d054cdde2983b0776ad331e5ff996efcf13e612a1a97d7b7fcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 10:12:45 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Main process exited, code=exited, status=139/n/a
Jan 26 10:12:46 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Failed with result 'exit-code'.
Jan 26 10:12:46 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 2.082s CPU time.
Jan 26 10:12:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:46.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:12:46] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 26 10:12:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:12:46] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 26 10:12:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:12:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:46.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:12:46 compute-0 ceph-mon[74456]: pgmap v916: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 23 KiB/s wr, 66 op/s
Jan 26 10:12:47 compute-0 nova_compute[254880]: 2026-01-26 10:12:47.111 254884 DEBUG nova.network.neutron [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updating instance_info_cache with network_info: [{"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:12:47 compute-0 nova_compute[254880]: 2026-01-26 10:12:47.132 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Releasing lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:12:47 compute-0 nova_compute[254880]: 2026-01-26 10:12:47.133 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 10:12:47 compute-0 nova_compute[254880]: 2026-01-26 10:12:47.133 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:12:47 compute-0 nova_compute[254880]: 2026-01-26 10:12:47.133 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:12:47 compute-0 nova_compute[254880]: 2026-01-26 10:12:47.133 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:12:47 compute-0 nova_compute[254880]: 2026-01-26 10:12:47.134 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:12:47 compute-0 nova_compute[254880]: 2026-01-26 10:12:47.134 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:12:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:12:47.151Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:12:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v917: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 64 op/s
Jan 26 10:12:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:12:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:48.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:12:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:12:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:12:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:12:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:12:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:12:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:12:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:12:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:12:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:12:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:48.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:48 compute-0 ceph-mon[74456]: pgmap v917: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 64 op/s
Jan 26 10:12:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:12:49 compute-0 nova_compute[254880]: 2026-01-26 10:12:49.489 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v918: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 64 op/s
Jan 26 10:12:49 compute-0 nova_compute[254880]: 2026-01-26 10:12:49.683 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:50 compute-0 ceph-mon[74456]: pgmap v918: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 64 op/s
Jan 26 10:12:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:50.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:50.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [WARNING] 025/101251 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 26 10:12:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze[98502]: [ALERT] 025/101251 (4) : backend 'backend' has no server available!
Jan 26 10:12:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v919: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Jan 26 10:12:52 compute-0 ceph-mon[74456]: pgmap v919: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Jan 26 10:12:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:52.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:12:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:52.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:12:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:12:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v920: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 26 10:12:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:54.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:54 compute-0 nova_compute[254880]: 2026-01-26 10:12:54.492 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:54.696 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:12:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:54.697 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:12:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:12:54.698 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:12:54 compute-0 nova_compute[254880]: 2026-01-26 10:12:54.733 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:12:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:54.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:12:54 compute-0 ceph-mon[74456]: pgmap v920: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 26 10:12:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v921: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 65 op/s
Jan 26 10:12:56 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Scheduled restart job, restart counter is at 15.
Jan 26 10:12:56 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 10:12:56 compute-0 systemd[1]: ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30@nfs.cephfs.2.0.compute-0.zfynkw.service: Consumed 2.082s CPU time.
Jan 26 10:12:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:12:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:56.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:12:56 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30...
Jan 26 10:12:56 compute-0 podman[269313]: 2026-01-26 10:12:56.505950336 +0000 UTC m=+0.040963110 container create 30687b991877ce56126a0423776942e639cc0488e2a92116947c3c0dae468e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/738deaa557fc5ebddbab4bc3c2fe0d188dbd67f73a5cc32127c608c4dfa50fb4/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/738deaa557fc5ebddbab4bc3c2fe0d188dbd67f73a5cc32127c608c4dfa50fb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/738deaa557fc5ebddbab4bc3c2fe0d188dbd67f73a5cc32127c608c4dfa50fb4/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/738deaa557fc5ebddbab4bc3c2fe0d188dbd67f73a5cc32127c608c4dfa50fb4/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.zfynkw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:12:56 compute-0 podman[269313]: 2026-01-26 10:12:56.561759231 +0000 UTC m=+0.096772035 container init 30687b991877ce56126a0423776942e639cc0488e2a92116947c3c0dae468e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:12:56 compute-0 podman[269313]: 2026-01-26 10:12:56.568922777 +0000 UTC m=+0.103935551 container start 30687b991877ce56126a0423776942e639cc0488e2a92116947c3c0dae468e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Jan 26 10:12:56 compute-0 bash[269313]: 30687b991877ce56126a0423776942e639cc0488e2a92116947c3c0dae468e31
Jan 26 10:12:56 compute-0 podman[269313]: 2026-01-26 10:12:56.489463045 +0000 UTC m=+0.024475849 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:12:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:12:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 26 10:12:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:12:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 26 10:12:56 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.zfynkw for 1a70b85d-e3fd-5814-8a6a-37ea00fcae30.
Jan 26 10:12:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:12:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 26 10:12:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:12:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 26 10:12:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:12:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 26 10:12:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:12:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 26 10:12:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:12:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 26 10:12:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:12:56] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Jan 26 10:12:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:12:56] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Jan 26 10:12:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:12:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:12:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:12:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:56.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:12:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:12:57.153Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:12:57 compute-0 ceph-mon[74456]: pgmap v921: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 65 op/s
Jan 26 10:12:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v922: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.3 KiB/s wr, 11 op/s
Jan 26 10:12:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:12:58.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:12:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:12:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:12:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:12:58.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:12:59 compute-0 podman[269372]: 2026-01-26 10:12:59.154744579 +0000 UTC m=+0.082809234 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:12:59 compute-0 nova_compute[254880]: 2026-01-26 10:12:59.503 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:12:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v923: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.3 KiB/s wr, 11 op/s
Jan 26 10:12:59 compute-0 nova_compute[254880]: 2026-01-26 10:12:59.734 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:00 compute-0 ovn_controller[155832]: 2026-01-26T10:13:00Z|00049|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 26 10:13:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:00.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:00.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:01 compute-0 ceph-mon[74456]: pgmap v922: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.3 KiB/s wr, 11 op/s
Jan 26 10:13:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1241123311' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:13:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1241123311' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:13:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v924: 353 pgs: 353 active+clean; 193 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 602 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 26 10:13:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:02.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:02 compute-0 ceph-mon[74456]: pgmap v923: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 314 KiB/s rd, 2.3 KiB/s wr, 11 op/s
Jan 26 10:13:02 compute-0 ceph-mon[74456]: pgmap v924: 353 pgs: 353 active+clean; 193 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 602 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 26 10:13:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:02.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:13:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:13:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:13:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:13:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v925: 353 pgs: 353 active+clean; 193 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:13:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:13:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:13:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:13:04 compute-0 sudo[269403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:13:04 compute-0 sudo[269403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:13:04 compute-0 sudo[269403]: pam_unix(sudo:session): session closed for user root
Jan 26 10:13:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:04.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:04 compute-0 nova_compute[254880]: 2026-01-26 10:13:04.506 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:04 compute-0 nova_compute[254880]: 2026-01-26 10:13:04.737 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:13:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:04.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:13:04 compute-0 ceph-mon[74456]: pgmap v925: 353 pgs: 353 active+clean; 193 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:13:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v926: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 26 10:13:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:06.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:06 compute-0 nova_compute[254880]: 2026-01-26 10:13:06.609 254884 DEBUG nova.compute.manager [req-67f4d966-ce2c-45e6-b658-0360025d56f1 req-16657152-d455-4ac7-aeca-156a44a3017d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-changed-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:13:06 compute-0 nova_compute[254880]: 2026-01-26 10:13:06.610 254884 DEBUG nova.compute.manager [req-67f4d966-ce2c-45e6-b658-0360025d56f1 req-16657152-d455-4ac7-aeca-156a44a3017d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Refreshing instance network info cache due to event network-changed-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:13:06 compute-0 nova_compute[254880]: 2026-01-26 10:13:06.610 254884 DEBUG oslo_concurrency.lockutils [req-67f4d966-ce2c-45e6-b658-0360025d56f1 req-16657152-d455-4ac7-aeca-156a44a3017d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:13:06 compute-0 nova_compute[254880]: 2026-01-26 10:13:06.610 254884 DEBUG oslo_concurrency.lockutils [req-67f4d966-ce2c-45e6-b658-0360025d56f1 req-16657152-d455-4ac7-aeca-156a44a3017d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:13:06 compute-0 nova_compute[254880]: 2026-01-26 10:13:06.611 254884 DEBUG nova.network.neutron [req-67f4d966-ce2c-45e6-b658-0360025d56f1 req-16657152-d455-4ac7-aeca-156a44a3017d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Refreshing network info cache for port 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:13:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:13:06] "GET /metrics HTTP/1.1" 200 48405 "" "Prometheus/2.51.0"
Jan 26 10:13:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:13:06] "GET /metrics HTTP/1.1" 200 48405 "" "Prometheus/2.51.0"
Jan 26 10:13:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:06.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:06 compute-0 ceph-mon[74456]: pgmap v926: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 26 10:13:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:13:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:13:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:13:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:13:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:13:07.154Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:13:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:13:07.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:13:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v927: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 26 10:13:07 compute-0 nova_compute[254880]: 2026-01-26 10:13:07.760 254884 DEBUG nova.network.neutron [req-67f4d966-ce2c-45e6-b658-0360025d56f1 req-16657152-d455-4ac7-aeca-156a44a3017d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updated VIF entry in instance network info cache for port 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:13:07 compute-0 nova_compute[254880]: 2026-01-26 10:13:07.761 254884 DEBUG nova.network.neutron [req-67f4d966-ce2c-45e6-b658-0360025d56f1 req-16657152-d455-4ac7-aeca-156a44a3017d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updating instance_info_cache with network_info: [{"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:13:07 compute-0 nova_compute[254880]: 2026-01-26 10:13:07.779 254884 DEBUG oslo_concurrency.lockutils [req-67f4d966-ce2c-45e6-b658-0360025d56f1 req-16657152-d455-4ac7-aeca-156a44a3017d b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:13:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:08.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.640677) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422388640705, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1539, "num_deletes": 255, "total_data_size": 2984348, "memory_usage": 3014864, "flush_reason": "Manual Compaction"}
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422388655437, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2876864, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26887, "largest_seqno": 28425, "table_properties": {"data_size": 2869763, "index_size": 4108, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14635, "raw_average_key_size": 19, "raw_value_size": 2855521, "raw_average_value_size": 3807, "num_data_blocks": 180, "num_entries": 750, "num_filter_entries": 750, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769422247, "oldest_key_time": 1769422247, "file_creation_time": 1769422388, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 14812 microseconds, and 5756 cpu microseconds.
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.655485) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2876864 bytes OK
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.655506) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.657110) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.657124) EVENT_LOG_v1 {"time_micros": 1769422388657119, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.657141) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2977796, prev total WAL file size 2977796, number of live WAL files 2.
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.658091) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2809KB)], [59(13MB)]
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422388658122, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17430579, "oldest_snapshot_seqno": -1}
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6042 keys, 17284411 bytes, temperature: kUnknown
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422388742359, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 17284411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17240432, "index_size": 27741, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 153635, "raw_average_key_size": 25, "raw_value_size": 17127933, "raw_average_value_size": 2834, "num_data_blocks": 1138, "num_entries": 6042, "num_filter_entries": 6042, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769422388, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.742632) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17284411 bytes
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.743903) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.7 rd, 204.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 13.9 +0.0 blob) out(16.5 +0.0 blob), read-write-amplify(12.1) write-amplify(6.0) OK, records in: 6570, records dropped: 528 output_compression: NoCompression
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.743918) EVENT_LOG_v1 {"time_micros": 1769422388743911, "job": 32, "event": "compaction_finished", "compaction_time_micros": 84336, "compaction_time_cpu_micros": 38268, "output_level": 6, "num_output_files": 1, "total_output_size": 17284411, "num_input_records": 6570, "num_output_records": 6042, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422388744530, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422388746846, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.658021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.746910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.746916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.746919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.746921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:13:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:13:08.746923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:13:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:08.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:08 compute-0 ceph-mon[74456]: pgmap v927: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 26 10:13:09 compute-0 sshd-session[269434]: Invalid user postgres from 157.245.76.178 port 40976
Jan 26 10:13:09 compute-0 sshd-session[269434]: Connection closed by invalid user postgres 157.245.76.178 port 40976 [preauth]
Jan 26 10:13:09 compute-0 nova_compute[254880]: 2026-01-26 10:13:09.545 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v928: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 26 10:13:09 compute-0 nova_compute[254880]: 2026-01-26 10:13:09.739 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:10.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:13:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:10.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:13:10 compute-0 ceph-mon[74456]: pgmap v928: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 26 10:13:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v929: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 26 10:13:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:13:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:13:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:13:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:13:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:13:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:12.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:13:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:12.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:13 compute-0 ceph-mon[74456]: pgmap v929: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 26 10:13:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:13:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v930: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 96 KiB/s wr, 64 op/s
Jan 26 10:13:13 compute-0 sudo[269440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:13:13 compute-0 sudo[269440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:13:13 compute-0 sudo[269440]: pam_unix(sudo:session): session closed for user root
Jan 26 10:13:13 compute-0 sudo[269465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:13:13 compute-0 sudo[269465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:13:14 compute-0 ceph-mon[74456]: pgmap v930: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 96 KiB/s wr, 64 op/s
Jan 26 10:13:14 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1814321471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:13:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:14.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:14 compute-0 sudo[269465]: pam_unix(sudo:session): session closed for user root
Jan 26 10:13:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:13:14 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:13:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:13:14 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:13:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:13:14 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:13:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:13:14 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:13:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:13:14 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:13:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:13:14 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:13:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:13:14 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:13:14 compute-0 nova_compute[254880]: 2026-01-26 10:13:14.546 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:14 compute-0 sudo[269525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:13:14 compute-0 sudo[269525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:13:14 compute-0 sudo[269525]: pam_unix(sudo:session): session closed for user root
Jan 26 10:13:14 compute-0 sudo[269550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:13:14 compute-0 sudo[269550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:13:14 compute-0 nova_compute[254880]: 2026-01-26 10:13:14.742 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:13:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:14.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:13:15 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:13:15 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:13:15 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:13:15 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:13:15 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:13:15 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:13:15 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:13:15 compute-0 podman[269618]: 2026-01-26 10:13:15.106546602 +0000 UTC m=+0.062708183 container create 900b348048bddf8c87709b5f3d50d27aa4b7ce7fae4faa0c9f0cb9772c05e52b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_diffie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 10:13:15 compute-0 systemd[1]: Started libpod-conmon-900b348048bddf8c87709b5f3d50d27aa4b7ce7fae4faa0c9f0cb9772c05e52b.scope.
Jan 26 10:13:15 compute-0 podman[269618]: 2026-01-26 10:13:15.079555764 +0000 UTC m=+0.035717415 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:13:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:13:15 compute-0 podman[269618]: 2026-01-26 10:13:15.209340078 +0000 UTC m=+0.165501679 container init 900b348048bddf8c87709b5f3d50d27aa4b7ce7fae4faa0c9f0cb9772c05e52b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_diffie, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:13:15 compute-0 podman[269618]: 2026-01-26 10:13:15.225424927 +0000 UTC m=+0.181586498 container start 900b348048bddf8c87709b5f3d50d27aa4b7ce7fae4faa0c9f0cb9772c05e52b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_diffie, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 10:13:15 compute-0 podman[269618]: 2026-01-26 10:13:15.229433796 +0000 UTC m=+0.185595387 container attach 900b348048bddf8c87709b5f3d50d27aa4b7ce7fae4faa0c9f0cb9772c05e52b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_diffie, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:13:15 compute-0 dazzling_diffie[269634]: 167 167
Jan 26 10:13:15 compute-0 systemd[1]: libpod-900b348048bddf8c87709b5f3d50d27aa4b7ce7fae4faa0c9f0cb9772c05e52b.scope: Deactivated successfully.
Jan 26 10:13:15 compute-0 conmon[269634]: conmon 900b348048bddf8c8770 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-900b348048bddf8c87709b5f3d50d27aa4b7ce7fae4faa0c9f0cb9772c05e52b.scope/container/memory.events
Jan 26 10:13:15 compute-0 podman[269618]: 2026-01-26 10:13:15.234589017 +0000 UTC m=+0.190750588 container died 900b348048bddf8c87709b5f3d50d27aa4b7ce7fae4faa0c9f0cb9772c05e52b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_diffie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4f763994b37c49264b812079cde4347ddf3545cd38a7400a094a482b03c544e-merged.mount: Deactivated successfully.
Jan 26 10:13:15 compute-0 podman[269618]: 2026-01-26 10:13:15.281689933 +0000 UTC m=+0.237851504 container remove 900b348048bddf8c87709b5f3d50d27aa4b7ce7fae4faa0c9f0cb9772c05e52b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 26 10:13:15 compute-0 systemd[1]: libpod-conmon-900b348048bddf8c87709b5f3d50d27aa4b7ce7fae4faa0c9f0cb9772c05e52b.scope: Deactivated successfully.
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.386 254884 DEBUG oslo_concurrency.lockutils [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "interface-26741812-4ddf-457d-b571-7e2005b5133d-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.388 254884 DEBUG oslo_concurrency.lockutils [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "interface-26741812-4ddf-457d-b571-7e2005b5133d-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.410 254884 DEBUG nova.objects.instance [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'flavor' on Instance uuid 26741812-4ddf-457d-b571-7e2005b5133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.430 254884 DEBUG nova.virt.libvirt.vif [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T10:11:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-955673138',display_name='tempest-TestNetworkBasicOps-server-955673138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-955673138',id=6,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCEIavFfmzh5bpA5QZf3zq5Gb6QqYI3VELaJd/a0a5TYtMMLwGqLcOYuI5vMKbR7fL+izNWg9808jvE9yRGaxYOyB4XbsZVXNV2ntaIKcWPfcrVa/D66+pB1i/BBWQEzIQ==',key_name='tempest-TestNetworkBasicOps-822391309',keypairs=<?>,launch_index=0,launched_at=2026-01-26T10:11:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-wm8zw3uy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T10:11:35Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=26741812-4ddf-457d-b571-7e2005b5133d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.430 254884 DEBUG nova.network.os_vif_util [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.431 254884 DEBUG nova.network.os_vif_util [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:da:8f,bridge_name='br-int',has_traffic_filtering=True,id=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8,network=Network(ae1cb66c-0987-4156-9bdb-cb2a08957306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2a6f2c-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.436 254884 DEBUG nova.virt.libvirt.guest [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:37:da:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5a2a6f2c-40"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.438 254884 DEBUG nova.virt.libvirt.guest [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:37:da:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5a2a6f2c-40"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.440 254884 DEBUG nova.virt.libvirt.driver [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Attempting to detach device tap5a2a6f2c-40 from instance 26741812-4ddf-457d-b571-7e2005b5133d from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.441 254884 DEBUG nova.virt.libvirt.guest [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] detach device xml: <interface type="ethernet">
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <mac address="fa:16:3e:37:da:8f"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <model type="virtio"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <mtu size="1442"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <target dev="tap5a2a6f2c-40"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]: </interface>
Jan 26 10:13:15 compute-0 nova_compute[254880]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.446 254884 DEBUG nova.virt.libvirt.guest [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:37:da:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5a2a6f2c-40"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.451 254884 DEBUG nova.virt.libvirt.guest [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:37:da:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5a2a6f2c-40"/></interface>not found in domain: <domain type='kvm' id='2'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <name>instance-00000006</name>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <uuid>26741812-4ddf-457d-b571-7e2005b5133d</uuid>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <metadata>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:name>tempest-TestNetworkBasicOps-server-955673138</nova:name>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:creationTime>2026-01-26 10:12:14</nova:creationTime>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:flavor name="m1.nano">
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:memory>128</nova:memory>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:disk>1</nova:disk>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:swap>0</nova:swap>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:vcpus>1</nova:vcpus>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </nova:flavor>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:owner>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:user uuid="c1208d3e25b940ea93fe76884c7a53db">tempest-TestNetworkBasicOps-966559857-project-member</nova:user>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:project uuid="6ed221b375a44fc2bb2a8f232c5446e7">tempest-TestNetworkBasicOps-966559857</nova:project>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </nova:owner>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:root type="image" uuid="6789692f-fc1f-4efa-ae75-dcc13be695ef"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:ports>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:port uuid="92a5f80f-60e2-449d-9da8-ebaa31f1476c">
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </nova:port>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:port uuid="5a2a6f2c-40e2-42ce-9d76-e334db61eeb8">
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </nova:port>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </nova:ports>
Jan 26 10:13:15 compute-0 nova_compute[254880]: </nova:instance>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </metadata>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <memory unit='KiB'>131072</memory>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <vcpu placement='static'>1</vcpu>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <resource>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <partition>/machine</partition>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </resource>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <sysinfo type='smbios'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <system>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <entry name='manufacturer'>RDO</entry>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <entry name='serial'>26741812-4ddf-457d-b571-7e2005b5133d</entry>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <entry name='uuid'>26741812-4ddf-457d-b571-7e2005b5133d</entry>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <entry name='family'>Virtual Machine</entry>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </system>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </sysinfo>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <os>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <boot dev='hd'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <smbios mode='sysinfo'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </os>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <features>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <acpi/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <apic/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <vmcoreinfo state='on'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </features>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <vendor>AMD</vendor>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='x2apic'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='hypervisor'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='stibp'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='ssbd'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='overflow-recov'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='succor'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='ibrs'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='lbrv'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='pause-filter'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='xsaves'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='svm'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='topoext'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='npt'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='nrip-save'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </cpu>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <clock offset='utc'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <timer name='hpet' present='no'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </clock>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <on_poweroff>destroy</on_poweroff>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <on_reboot>restart</on_reboot>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <on_crash>destroy</on_crash>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <devices>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <disk type='network' device='disk'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <auth username='openstack'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <secret type='ceph' uuid='1a70b85d-e3fd-5814-8a6a-37ea00fcae30'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <source protocol='rbd' name='vms/26741812-4ddf-457d-b571-7e2005b5133d_disk' index='2'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <host name='192.168.122.100' port='6789'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <host name='192.168.122.102' port='6789'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <host name='192.168.122.101' port='6789'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       </source>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target dev='vda' bus='virtio'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='virtio-disk0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <disk type='network' device='cdrom'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <auth username='openstack'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <secret type='ceph' uuid='1a70b85d-e3fd-5814-8a6a-37ea00fcae30'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <source protocol='rbd' name='vms/26741812-4ddf-457d-b571-7e2005b5133d_disk.config' index='1'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <host name='192.168.122.100' port='6789'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <host name='192.168.122.102' port='6789'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <host name='192.168.122.101' port='6789'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       </source>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target dev='sda' bus='sata'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <readonly/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='sata0-0-0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pcie.0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='1' port='0x10'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='2' port='0x11'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.2'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='3' port='0x12'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.3'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='4' port='0x13'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.4'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='5' port='0x14'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.5'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='6' port='0x15'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.6'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='7' port='0x16'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.7'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='8' port='0x17'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.8'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='9' port='0x18'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.9'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='10' port='0x19'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.10'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='11' port='0x1a'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.11'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='12' port='0x1b'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.12'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='13' port='0x1c'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.13'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='14' port='0x1d'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.14'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='15' port='0x1e'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.15'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='16' port='0x1f'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.16'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='17' port='0x20'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.17'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='18' port='0x21'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.18'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='19' port='0x22'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.19'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='20' port='0x23'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.20'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='21' port='0x24'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.21'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='22' port='0x25'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.22'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='23' port='0x26'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.23'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='24' port='0x27'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.24'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='25' port='0x28'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.25'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-pci-bridge'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.26'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='usb'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='sata' index='0'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='ide'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <interface type='ethernet'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <mac address='fa:16:3e:1b:a5:e7'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target dev='tap92a5f80f-60'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model type='virtio'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <mtu size='1442'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='net0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <interface type='ethernet'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <mac address='fa:16:3e:37:da:8f'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target dev='tap5a2a6f2c-40'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model type='virtio'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <mtu size='1442'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='net1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <serial type='pty'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <source path='/dev/pts/0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <log file='/var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/console.log' append='off'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target type='isa-serial' port='0'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <model name='isa-serial'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       </target>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='serial0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </serial>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <source path='/dev/pts/0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <log file='/var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/console.log' append='off'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target type='serial' port='0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='serial0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </console>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <input type='tablet' bus='usb'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='input0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='usb' bus='0' port='1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </input>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <input type='mouse' bus='ps2'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='input1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </input>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <input type='keyboard' bus='ps2'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='input2'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </input>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <listen type='address' address='::0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </graphics>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <audio id='1' type='none'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <video>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='video0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </video>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <watchdog model='itco' action='reset'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='watchdog0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </watchdog>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <memballoon model='virtio'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <stats period='10'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='balloon0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </memballoon>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <rng model='virtio'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <backend model='random'>/dev/urandom</backend>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='rng0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </rng>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </devices>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <label>system_u:system_r:svirt_t:s0:c58,c762</label>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c58,c762</imagelabel>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </seclabel>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <label>+107:+107</label>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <imagelabel>+107:+107</imagelabel>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </seclabel>
Jan 26 10:13:15 compute-0 nova_compute[254880]: </domain>
Jan 26 10:13:15 compute-0 nova_compute[254880]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.451 254884 INFO nova.virt.libvirt.driver [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully detached device tap5a2a6f2c-40 from instance 26741812-4ddf-457d-b571-7e2005b5133d from the persistent domain config.
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.451 254884 DEBUG nova.virt.libvirt.driver [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] (1/8): Attempting to detach device tap5a2a6f2c-40 with device alias net1 from instance 26741812-4ddf-457d-b571-7e2005b5133d from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.452 254884 DEBUG nova.virt.libvirt.guest [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] detach device xml: <interface type="ethernet">
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <mac address="fa:16:3e:37:da:8f"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <model type="virtio"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <mtu size="1442"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <target dev="tap5a2a6f2c-40"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]: </interface>
Jan 26 10:13:15 compute-0 nova_compute[254880]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 10:13:15 compute-0 kernel: tap5a2a6f2c-40 (unregistering): left promiscuous mode
Jan 26 10:13:15 compute-0 NetworkManager[48970]: <info>  [1769422395.5138] device (tap5a2a6f2c-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.523 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:15 compute-0 ovn_controller[155832]: 2026-01-26T10:13:15Z|00050|binding|INFO|Releasing lport 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 from this chassis (sb_readonly=0)
Jan 26 10:13:15 compute-0 ovn_controller[155832]: 2026-01-26T10:13:15Z|00051|binding|INFO|Setting lport 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 down in Southbound
Jan 26 10:13:15 compute-0 ovn_controller[155832]: 2026-01-26T10:13:15Z|00052|binding|INFO|Removing iface tap5a2a6f2c-40 ovn-installed in OVS
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.526 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.527 254884 DEBUG nova.virt.libvirt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Received event <DeviceRemovedEvent: 1769422395.5276623, 26741812-4ddf-457d-b571-7e2005b5133d => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 26 10:13:15 compute-0 podman[269658]: 2026-01-26 10:13:15.529452487 +0000 UTC m=+0.056691409 container create f83c586b88c8fb9c6688a1788e7e5be2d17e6aec286190d5200e60c1324c8dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.530 254884 DEBUG nova.virt.libvirt.driver [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Start waiting for the detach event from libvirt for device tap5a2a6f2c-40 with device alias net1 for instance 26741812-4ddf-457d-b571-7e2005b5133d _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.530 254884 DEBUG nova.virt.libvirt.guest [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:37:da:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5a2a6f2c-40"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.532 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:da:8f 10.100.0.26', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '26741812-4ddf-457d-b571-7e2005b5133d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ae1cb66c-0987-4156-9bdb-cb2a08957306', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73bcc0f9-41ce-47a1-86a1-53fe1b73bb31, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], logical_port=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.533 166625 INFO neutron.agent.ovn.metadata.agent [-] Port 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 in datapath ae1cb66c-0987-4156-9bdb-cb2a08957306 unbound from our chassis
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.534 166625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ae1cb66c-0987-4156-9bdb-cb2a08957306, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.536 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3df8ab-10b8-48e5-bf4a-a7ad0a0c4945]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.537 166625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306 namespace which is not needed anymore
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.538 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.539 254884 DEBUG nova.virt.libvirt.guest [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:37:da:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5a2a6f2c-40"/></interface>not found in domain: <domain type='kvm' id='2'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <name>instance-00000006</name>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <uuid>26741812-4ddf-457d-b571-7e2005b5133d</uuid>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <metadata>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:name>tempest-TestNetworkBasicOps-server-955673138</nova:name>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:creationTime>2026-01-26 10:12:14</nova:creationTime>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:flavor name="m1.nano">
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:memory>128</nova:memory>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:disk>1</nova:disk>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:swap>0</nova:swap>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:vcpus>1</nova:vcpus>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </nova:flavor>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:owner>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:user uuid="c1208d3e25b940ea93fe76884c7a53db">tempest-TestNetworkBasicOps-966559857-project-member</nova:user>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:project uuid="6ed221b375a44fc2bb2a8f232c5446e7">tempest-TestNetworkBasicOps-966559857</nova:project>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </nova:owner>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:root type="image" uuid="6789692f-fc1f-4efa-ae75-dcc13be695ef"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:ports>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:port uuid="92a5f80f-60e2-449d-9da8-ebaa31f1476c">
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </nova:port>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:port uuid="5a2a6f2c-40e2-42ce-9d76-e334db61eeb8">
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </nova:port>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </nova:ports>
Jan 26 10:13:15 compute-0 nova_compute[254880]: </nova:instance>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </metadata>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <memory unit='KiB'>131072</memory>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <vcpu placement='static'>1</vcpu>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <resource>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <partition>/machine</partition>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </resource>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <sysinfo type='smbios'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <system>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <entry name='manufacturer'>RDO</entry>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <entry name='serial'>26741812-4ddf-457d-b571-7e2005b5133d</entry>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <entry name='uuid'>26741812-4ddf-457d-b571-7e2005b5133d</entry>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <entry name='family'>Virtual Machine</entry>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </system>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </sysinfo>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <os>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <boot dev='hd'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <smbios mode='sysinfo'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </os>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <features>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <acpi/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <apic/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <vmcoreinfo state='on'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </features>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <vendor>AMD</vendor>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='x2apic'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='hypervisor'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='stibp'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='ssbd'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='overflow-recov'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='succor'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='ibrs'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='lbrv'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='pause-filter'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='xsaves'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='svm'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='require' name='topoext'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='npt'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <feature policy='disable' name='nrip-save'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </cpu>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <clock offset='utc'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <timer name='hpet' present='no'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </clock>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <on_poweroff>destroy</on_poweroff>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <on_reboot>restart</on_reboot>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <on_crash>destroy</on_crash>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <devices>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <disk type='network' device='disk'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <auth username='openstack'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <secret type='ceph' uuid='1a70b85d-e3fd-5814-8a6a-37ea00fcae30'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <source protocol='rbd' name='vms/26741812-4ddf-457d-b571-7e2005b5133d_disk' index='2'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <host name='192.168.122.100' port='6789'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <host name='192.168.122.102' port='6789'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <host name='192.168.122.101' port='6789'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       </source>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target dev='vda' bus='virtio'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='virtio-disk0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <disk type='network' device='cdrom'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <auth username='openstack'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <secret type='ceph' uuid='1a70b85d-e3fd-5814-8a6a-37ea00fcae30'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <source protocol='rbd' name='vms/26741812-4ddf-457d-b571-7e2005b5133d_disk.config' index='1'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <host name='192.168.122.100' port='6789'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <host name='192.168.122.102' port='6789'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <host name='192.168.122.101' port='6789'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       </source>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target dev='sda' bus='sata'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <readonly/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='sata0-0-0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pcie.0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='1' port='0x10'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='2' port='0x11'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.2'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='3' port='0x12'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.3'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='4' port='0x13'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.4'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='5' port='0x14'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.5'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='6' port='0x15'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.6'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='7' port='0x16'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.7'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='8' port='0x17'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.8'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='9' port='0x18'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.9'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='10' port='0x19'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.10'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='11' port='0x1a'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.11'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='12' port='0x1b'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.12'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='13' port='0x1c'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.13'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='14' port='0x1d'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.14'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='15' port='0x1e'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.15'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='16' port='0x1f'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.16'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='17' port='0x20'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.17'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='18' port='0x21'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.18'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='19' port='0x22'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.19'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='20' port='0x23'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.20'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='21' port='0x24'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.21'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='22' port='0x25'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.22'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='23' port='0x26'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.23'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='24' port='0x27'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.24'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target chassis='25' port='0x28'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.25'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model name='pcie-pci-bridge'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='pci.26'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='usb'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <controller type='sata' index='0'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='ide'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <interface type='ethernet'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <mac address='fa:16:3e:1b:a5:e7'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target dev='tap92a5f80f-60'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model type='virtio'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <mtu size='1442'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='net0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <serial type='pty'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <source path='/dev/pts/0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <log file='/var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/console.log' append='off'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target type='isa-serial' port='0'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:         <model name='isa-serial'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       </target>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='serial0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </serial>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <source path='/dev/pts/0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <log file='/var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/console.log' append='off'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <target type='serial' port='0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='serial0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </console>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <input type='tablet' bus='usb'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='input0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='usb' bus='0' port='1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </input>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <input type='mouse' bus='ps2'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='input1'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </input>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <input type='keyboard' bus='ps2'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='input2'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </input>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <listen type='address' address='::0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </graphics>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <audio id='1' type='none'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <video>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='video0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </video>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <watchdog model='itco' action='reset'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='watchdog0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </watchdog>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <memballoon model='virtio'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <stats period='10'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='balloon0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </memballoon>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <rng model='virtio'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <backend model='random'>/dev/urandom</backend>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <alias name='rng0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </rng>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </devices>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <label>system_u:system_r:svirt_t:s0:c58,c762</label>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c58,c762</imagelabel>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </seclabel>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <label>+107:+107</label>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <imagelabel>+107:+107</imagelabel>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </seclabel>
Jan 26 10:13:15 compute-0 nova_compute[254880]: </domain>
Jan 26 10:13:15 compute-0 nova_compute[254880]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.539 254884 INFO nova.virt.libvirt.driver [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully detached device tap5a2a6f2c-40 of instance 26741812-4ddf-457d-b571-7e2005b5133d from the live domain config.
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.540 254884 DEBUG nova.virt.libvirt.vif [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T10:11:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-955673138',display_name='tempest-TestNetworkBasicOps-server-955673138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-955673138',id=6,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCEIavFfmzh5bpA5QZf3zq5Gb6QqYI3VELaJd/a0a5TYtMMLwGqLcOYuI5vMKbR7fL+izNWg9808jvE9yRGaxYOyB4XbsZVXNV2ntaIKcWPfcrVa/D66+pB1i/BBWQEzIQ==',key_name='tempest-TestNetworkBasicOps-822391309',keypairs=<?>,launch_index=0,launched_at=2026-01-26T10:11:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-wm8zw3uy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T10:11:35Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=26741812-4ddf-457d-b571-7e2005b5133d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.540 254884 DEBUG nova.network.os_vif_util [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.540 254884 DEBUG nova.network.os_vif_util [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:da:8f,bridge_name='br-int',has_traffic_filtering=True,id=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8,network=Network(ae1cb66c-0987-4156-9bdb-cb2a08957306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2a6f2c-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.541 254884 DEBUG os_vif [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:da:8f,bridge_name='br-int',has_traffic_filtering=True,id=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8,network=Network(ae1cb66c-0987-4156-9bdb-cb2a08957306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2a6f2c-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.542 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.543 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a2a6f2c-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.545 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.551 254884 INFO os_vif [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:da:8f,bridge_name='br-int',has_traffic_filtering=True,id=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8,network=Network(ae1cb66c-0987-4156-9bdb-cb2a08957306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2a6f2c-40')
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.552 254884 DEBUG nova.virt.libvirt.guest [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:name>tempest-TestNetworkBasicOps-server-955673138</nova:name>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:creationTime>2026-01-26 10:13:15</nova:creationTime>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:flavor name="m1.nano">
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:memory>128</nova:memory>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:disk>1</nova:disk>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:swap>0</nova:swap>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:vcpus>1</nova:vcpus>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </nova:flavor>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:owner>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:user uuid="c1208d3e25b940ea93fe76884c7a53db">tempest-TestNetworkBasicOps-966559857-project-member</nova:user>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:project uuid="6ed221b375a44fc2bb2a8f232c5446e7">tempest-TestNetworkBasicOps-966559857</nova:project>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </nova:owner>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:root type="image" uuid="6789692f-fc1f-4efa-ae75-dcc13be695ef"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   <nova:ports>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     <nova:port uuid="92a5f80f-60e2-449d-9da8-ebaa31f1476c">
Jan 26 10:13:15 compute-0 nova_compute[254880]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 10:13:15 compute-0 nova_compute[254880]:     </nova:port>
Jan 26 10:13:15 compute-0 nova_compute[254880]:   </nova:ports>
Jan 26 10:13:15 compute-0 nova_compute[254880]: </nova:instance>
Jan 26 10:13:15 compute-0 nova_compute[254880]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 10:13:15 compute-0 systemd[1]: Started libpod-conmon-f83c586b88c8fb9c6688a1788e7e5be2d17e6aec286190d5200e60c1324c8dd9.scope.
Jan 26 10:13:15 compute-0 podman[269658]: 2026-01-26 10:13:15.498559434 +0000 UTC m=+0.025798386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:13:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42fe8ba14ef933579e6d92cfe6391570ca9e82b68022530b147817e61a3fb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42fe8ba14ef933579e6d92cfe6391570ca9e82b68022530b147817e61a3fb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42fe8ba14ef933579e6d92cfe6391570ca9e82b68022530b147817e61a3fb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42fe8ba14ef933579e6d92cfe6391570ca9e82b68022530b147817e61a3fb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba42fe8ba14ef933579e6d92cfe6391570ca9e82b68022530b147817e61a3fb7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:13:15 compute-0 podman[269658]: 2026-01-26 10:13:15.639752598 +0000 UTC m=+0.166991540 container init f83c586b88c8fb9c6688a1788e7e5be2d17e6aec286190d5200e60c1324c8dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_hodgkin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:13:15 compute-0 podman[269658]: 2026-01-26 10:13:15.650118171 +0000 UTC m=+0.177357093 container start f83c586b88c8fb9c6688a1788e7e5be2d17e6aec286190d5200e60c1324c8dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_hodgkin, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:13:15 compute-0 podman[269658]: 2026-01-26 10:13:15.653987547 +0000 UTC m=+0.181226469 container attach f83c586b88c8fb9c6688a1788e7e5be2d17e6aec286190d5200e60c1324c8dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_hodgkin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:13:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v931: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 98 KiB/s wr, 92 op/s
Jan 26 10:13:15 compute-0 neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306[268992]: [NOTICE]   (269007) : haproxy version is 2.8.14-c23fe91
Jan 26 10:13:15 compute-0 neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306[268992]: [NOTICE]   (269007) : path to executable is /usr/sbin/haproxy
Jan 26 10:13:15 compute-0 neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306[268992]: [WARNING]  (269007) : Exiting Master process...
Jan 26 10:13:15 compute-0 neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306[268992]: [ALERT]    (269007) : Current worker (269013) exited with code 143 (Terminated)
Jan 26 10:13:15 compute-0 neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306[268992]: [WARNING]  (269007) : All workers exited. Exiting... (0)
Jan 26 10:13:15 compute-0 systemd[1]: libpod-18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86.scope: Deactivated successfully.
Jan 26 10:13:15 compute-0 podman[269700]: 2026-01-26 10:13:15.682247678 +0000 UTC m=+0.049606796 container died 18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-8737b16b8e1692aac64eef54da4f3386039bd5e712accc6e0a5d3d90456f9e0b-merged.mount: Deactivated successfully.
Jan 26 10:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86-userdata-shm.mount: Deactivated successfully.
Jan 26 10:13:15 compute-0 podman[269700]: 2026-01-26 10:13:15.721184601 +0000 UTC m=+0.088543709 container cleanup 18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:13:15 compute-0 systemd[1]: libpod-conmon-18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86.scope: Deactivated successfully.
Jan 26 10:13:15 compute-0 podman[269728]: 2026-01-26 10:13:15.793355471 +0000 UTC m=+0.048515235 container remove 18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.798 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e66d57-91c6-4ee5-8a0e-e051c902a213]: (4, ('Mon Jan 26 10:13:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306 (18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86)\n18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86\nMon Jan 26 10:13:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306 (18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86)\n18d8f90b9bf338f37a1b4ee8524f5d4120d8788bce24bac58b1a9083498e1c86\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.800 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a69d6f-e033-40cf-80e4-585d05999a30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.801 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae1cb66c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.803 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:15 compute-0 kernel: tapae1cb66c-00: left promiscuous mode
Jan 26 10:13:15 compute-0 nova_compute[254880]: 2026-01-26 10:13:15.816 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.820 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4a4941-93e8-4d29-bfaa-5db711ae0c40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.838 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[c39990e4-c80f-4fe3-8999-61809d2b88c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.840 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[c16bbcff-74d8-425f-9a1b-a3e2010db77d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.860 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a4d964-17a7-4ef8-9ef4-7d9d859e1bed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428821, 'reachable_time': 24005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269743, 'error': None, 'target': 'ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
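[editor's note] The two privsep replies above are the return path of neutron's privilege separation: the unprivileged agent sends a call over a unix socket, the root privsep daemon executes it (here, a netlink link dump inside the ovnmeta namespace), and the reply tuple carries the result back, matched to its caller by the reply[...] UUID. A minimal sketch of defining an oslo.privsep entrypoint follows; the context name, capability set, and function body are illustrative assumptions, not neutron's actual configuration:

    # Minimal oslo.privsep sketch (illustrative; not neutron's real setup).
    import os
    from oslo_privsep import capabilities, priv_context

    ctx = priv_context.PrivContext(
        'demo',                          # assumed package prefix
        cfg_section='privsep',
        pypath=__name__ + '.ctx',
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    @ctx.entrypoint
    def get_link_names():
        # Runs inside the root privsep daemon; the return value travels
        # back over the channel as a reply tuple like the ones logged above.
        return os.listdir('/sys/class/net')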
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.862 167020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 10:13:15 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:15.862 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[82a0332f-c38e-4124-b0b5-c799a67d9697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:15 compute-0 systemd[1]: run-netns-ovnmeta\x2dae1cb66c\x2d0987\x2d4156\x2d9bdb\x2dcb2a08957306.mount: Deactivated successfully.
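[editor's note] remove_netns tears down the ovnmeta- namespace once the served network is gone, and systemd then reports the backing /run/netns bind mount as deactivated. A rough equivalent of that privileged call using pyroute2 (a sketch, not neutron's exact code path; the namespace name is copied from the log):

    # Rough pyroute2 equivalent of neutron's privileged remove_netns.
    import errno
    from pyroute2 import netns

    def remove_namespace(name):
        try:
            netns.remove(name)  # unmounts and unlinks /run/netns/<name>
        except OSError as e:
            if e.errno != errno.ENOENT:  # tolerate "already deleted"
                raise

    remove_namespace('ovnmeta-ae1cb66c-0987-4156-9bdb-cb2a08957306')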
Jan 26 10:13:16 compute-0 elated_hodgkin[269689]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:13:16 compute-0 elated_hodgkin[269689]: --> All data devices are unavailable
Jan 26 10:13:16 compute-0 ceph-mon[74456]: pgmap v931: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 98 KiB/s wr, 92 op/s
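[editor's note] The monitor's pgmap line is the periodic summary of placement-group states and cluster I/O. The same numbers can be pulled programmatically from the status JSON (sketch; assumes the ceph CLI and an admin keyring are available on the host):

    # Sketch: read the pgmap summary that ceph-mon logs each tick.
    import json
    import subprocess

    status = json.loads(subprocess.run(
        ['ceph', 'status', '--format', 'json'],
        check=True, capture_output=True, text=True).stdout)
    pgmap = status['pgmap']
    print(pgmap['num_pgs'], 'pgs,', pgmap['bytes_used'], 'bytes used')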
Jan 26 10:13:16 compute-0 systemd[1]: libpod-f83c586b88c8fb9c6688a1788e7e5be2d17e6aec286190d5200e60c1324c8dd9.scope: Deactivated successfully.
Jan 26 10:13:16 compute-0 podman[269658]: 2026-01-26 10:13:16.092737014 +0000 UTC m=+0.619975956 container died f83c586b88c8fb9c6688a1788e7e5be2d17e6aec286190d5200e60c1324c8dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_hodgkin, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.146 254884 DEBUG nova.compute.manager [req-5ee097c4-f898-4bc2-b2a7-37c89dbd242e req-cafa943d-faef-4bfb-afc1-1836d10196a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-vif-unplugged-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.147 254884 DEBUG oslo_concurrency.lockutils [req-5ee097c4-f898-4bc2-b2a7-37c89dbd242e req-cafa943d-faef-4bfb-afc1-1836d10196a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "26741812-4ddf-457d-b571-7e2005b5133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.147 254884 DEBUG oslo_concurrency.lockutils [req-5ee097c4-f898-4bc2-b2a7-37c89dbd242e req-cafa943d-faef-4bfb-afc1-1836d10196a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.147 254884 DEBUG oslo_concurrency.lockutils [req-5ee097c4-f898-4bc2-b2a7-37c89dbd242e req-cafa943d-faef-4bfb-afc1-1836d10196a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.147 254884 DEBUG nova.compute.manager [req-5ee097c4-f898-4bc2-b2a7-37c89dbd242e req-cafa943d-faef-4bfb-afc1-1836d10196a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] No waiting events found dispatching network-vif-unplugged-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.147 254884 WARNING nova.compute.manager [req-5ee097c4-f898-4bc2-b2a7-37c89dbd242e req-cafa943d-faef-4bfb-afc1-1836d10196a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received unexpected event network-vif-unplugged-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 for instance with vm_state active and task_state None.
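[editor's note] This warning is benign during port deletion: nova only registers event waiters when it initiated the network change itself, so an unsolicited network-vif-unplugged for an active instance is logged and dropped. The acquire/release pair around it is the standard oslo.concurrency pattern for serializing access to the per-instance event list; roughly (the lock name mirrors the log, the event store is an assumed stand-in):

    # Sketch of the oslo.concurrency lock pattern in the lines above.
    from oslo_concurrency import lockutils

    _events = {}  # instance uuid -> list of waiting event names (assumed)

    def pop_instance_event(instance_uuid, event_name):
        with lockutils.lock('%s-events' % instance_uuid):
            waiters = _events.get(instance_uuid, [])
            if event_name in waiters:
                waiters.remove(event_name)
                return event_name
        return None  # "No waiting events found" -> caller logs the warning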
Jan 26 10:13:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba42fe8ba14ef933579e6d92cfe6391570ca9e82b68022530b147817e61a3fb7-merged.mount: Deactivated successfully.
Jan 26 10:13:16 compute-0 podman[269658]: 2026-01-26 10:13:16.169372596 +0000 UTC m=+0.696611528 container remove f83c586b88c8fb9c6688a1788e7e5be2d17e6aec286190d5200e60c1324c8dd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:13:16 compute-0 systemd[1]: libpod-conmon-f83c586b88c8fb9c6688a1788e7e5be2d17e6aec286190d5200e60c1324c8dd9.scope: Deactivated successfully.
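[editor's note] The scope deactivations, "container died", overlay unmount, and "container remove" lines are podman's normal teardown sequence for a short-lived cephadm container. The same lifecycle can be observed from the event stream (sketch; podman's events subcommand and its filters are standard, the container name is taken from the log):

    # Watch podman lifecycle events (init/start/died/remove) for the
    # container torn down above. Sketch; requires podman on the host.
    import subprocess

    proc = subprocess.Popen(
        ['podman', 'events',
         '--filter', 'container=elated_hodgkin',
         '--format', '{{.Status}} {{.Name}}'],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        print(line.rstrip())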
Jan 26 10:13:16 compute-0 podman[269754]: 2026-01-26 10:13:16.177721734 +0000 UTC m=+0.105619395 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
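[editor's note] The health_status=healthy event is emitted when podman runs the healthcheck configured in config_data ('test': '/openstack/healthcheck') and it exits zero. Declaring an equivalent check when starting a container looks like this (sketch; the interval is an assumed value, the image digest and test command are copied from the log):

    # Sketch: start a container with a podman healthcheck equivalent to
    # the config_data above. The interval is an assumption.
    import subprocess

    subprocess.run([
        'podman', 'run', '-d', '--name', 'demo',
        '--health-cmd', '/openstack/healthcheck',
        '--health-interval', '30s',
        'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2',
    ], check=True)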
Jan 26 10:13:16 compute-0 sudo[269550]: pam_unix(sudo:session): session closed for user root
Jan 26 10:13:16 compute-0 sudo[269786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:13:16 compute-0 sudo[269786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:13:16 compute-0 sudo[269786]: pam_unix(sudo:session): session closed for user root
Jan 26 10:13:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:13:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:16.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
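[editor's note] The beast access line records an anonymous HEAD / answered with 200 in about a millisecond, the signature of a load-balancer or monitoring probe rather than an S3 client. Reproducing the probe (sketch; the port is an assumption, since the beast frontend's listen address is deployment-specific):

    # Reproduce the anonymous health probe from the radosgw access log.
    # The URL/port is an assumption; check the beast frontend config.
    import requests

    resp = requests.head('http://192.168.122.100:8080/', timeout=5)
    print(resp.status_code)  # 200 when RGW is up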
Jan 26 10:13:16 compute-0 sudo[269811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:13:16 compute-0 sudo[269811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
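[editor's note] cephadm copies itself under /var/lib/ceph/<fsid>/ and re-executes via sudo, wrapping ceph-volume inside the ceph container; --format json makes the LV inventory machine-readable (this is the same scan that printed "0 physical, 1 LVM" above). Consuming that output directly looks like this (sketch; here ceph-volume is invoked on the host rather than through the cephadm wrapper):

    # Sketch: parse "ceph-volume lvm list --format json" to summarize OSDs.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'],
        check=True, capture_output=True, text=True).stdout
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            print(osd_id, dev.get('type'), dev.get('path'))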
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.534 254884 DEBUG oslo_concurrency.lockutils [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.534 254884 DEBUG oslo_concurrency.lockutils [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquired lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.534 254884 DEBUG nova.network.neutron [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.597 254884 DEBUG nova.compute.manager [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-vif-deleted-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.599 254884 INFO nova.compute.manager [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Neutron deleted interface 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8; detaching it from the instance and deleting it from the info cache
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.599 254884 DEBUG nova.network.neutron [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updating instance_info_cache with network_info: [{"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
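[editor's note] After Neutron reports the port gone, nova rewrites instance_info_cache so that only the surviving port 92a5f80f-60e2 remains, with its fixed IP and floating IP. The cache entry is plain JSON, so extracting addresses from it is straightforward (sketch; network_info abbreviated to the fields used):

    # Walk a nova network_info list (as logged above) and print each VIF's
    # fixed IP plus any floating IPs. Sample abbreviated from the log.
    network_info = [{
        'id': '92a5f80f-60e2-449d-9da8-ebaa31f1476c',
        'network': {'subnets': [{
            'ips': [{'address': '10.100.0.11', 'type': 'fixed',
                     'floating_ips': [{'address': '192.168.122.187'}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floats)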
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.627 254884 DEBUG nova.objects.instance [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lazy-loading 'system_metadata' on Instance uuid 26741812-4ddf-457d-b571-7e2005b5133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:13:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:13:16] "GET /metrics HTTP/1.1" 200 48405 "" "Prometheus/2.51.0"
Jan 26 10:13:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:13:16] "GET /metrics HTTP/1.1" 200 48405 "" "Prometheus/2.51.0"
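[editor's note] The mgr's prometheus module serves /metrics, and the scrape appears twice because both the container's stdout and the ceph-mgr daemon land in the journal. Fetching the endpoint manually (sketch; 9283 is the module's default port, which a deployment may override):

    # Fetch the ceph-mgr prometheus endpoint scraped above.
    # Port 9283 is the module default; deployments may differ.
    import requests

    text = requests.get('http://192.168.122.100:9283/metrics', timeout=5).text
    print(text.splitlines()[0])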
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.650 254884 DEBUG nova.objects.instance [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lazy-loading 'flavor' on Instance uuid 26741812-4ddf-457d-b571-7e2005b5133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.672 254884 DEBUG nova.virt.libvirt.vif [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T10:11:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-955673138',display_name='tempest-TestNetworkBasicOps-server-955673138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-955673138',id=6,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCEIavFfmzh5bpA5QZf3zq5Gb6QqYI3VELaJd/a0a5TYtMMLwGqLcOYuI5vMKbR7fL+izNWg9808jvE9yRGaxYOyB4XbsZVXNV2ntaIKcWPfcrVa/D66+pB1i/BBWQEzIQ==',key_name='tempest-TestNetworkBasicOps-822391309',keypairs=<?>,launch_index=0,launched_at=2026-01-26T10:11:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-wm8zw3uy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T10:11:35Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=26741812-4ddf-457d-b571-7e2005b5133d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.672 254884 DEBUG nova.network.os_vif_util [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Converting VIF {"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.673 254884 DEBUG nova.network.os_vif_util [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:da:8f,bridge_name='br-int',has_traffic_filtering=True,id=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8,network=Network(ae1cb66c-0987-4156-9bdb-cb2a08957306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2a6f2c-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
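[editor's note] nova_to_osvif_vif narrows the Neutron-style VIF dict down to the typed VIFOpenVSwitch object that os-vif plugins plug and unplug. A simplified stand-in for that mapping (the dataclass below is hypothetical and mirrors only the fields visible in the logged repr; os-vif's real versioned objects carry more):

    # Hypothetical stand-in for the conversion logged above: project the
    # nova VIF dict onto the fields seen in the VIFOpenVSwitch repr.
    from dataclasses import dataclass

    @dataclass
    class SimpleVIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool

    def nova_to_osvif_vif(vif):
        return SimpleVIFOpenVSwitch(
            id=vif['id'],
            address=vif['address'],
            bridge_name=vif['details']['bridge_name'],
            vif_name=vif['devname'],
            active=vif['active'],
        )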
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.678 254884 DEBUG nova.virt.libvirt.guest [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:37:da:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5a2a6f2c-40"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.682 254884 DEBUG nova.virt.libvirt.guest [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:37:da:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5a2a6f2c-40"/></interface> not found in domain: <domain type='kvm' id='2'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <name>instance-00000006</name>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <uuid>26741812-4ddf-457d-b571-7e2005b5133d</uuid>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <metadata>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:name>tempest-TestNetworkBasicOps-server-955673138</nova:name>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:creationTime>2026-01-26 10:13:15</nova:creationTime>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:flavor name="m1.nano">
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:memory>128</nova:memory>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:disk>1</nova:disk>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:swap>0</nova:swap>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:vcpus>1</nova:vcpus>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </nova:flavor>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:owner>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:user uuid="c1208d3e25b940ea93fe76884c7a53db">tempest-TestNetworkBasicOps-966559857-project-member</nova:user>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:project uuid="6ed221b375a44fc2bb2a8f232c5446e7">tempest-TestNetworkBasicOps-966559857</nova:project>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </nova:owner>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:root type="image" uuid="6789692f-fc1f-4efa-ae75-dcc13be695ef"/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:ports>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:port uuid="92a5f80f-60e2-449d-9da8-ebaa31f1476c">
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </nova:port>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </nova:ports>
Jan 26 10:13:16 compute-0 nova_compute[254880]: </nova:instance>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </metadata>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <memory unit='KiB'>131072</memory>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <vcpu placement='static'>1</vcpu>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <resource>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <partition>/machine</partition>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </resource>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <sysinfo type='smbios'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <system>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <entry name='manufacturer'>RDO</entry>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <entry name='serial'>26741812-4ddf-457d-b571-7e2005b5133d</entry>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <entry name='uuid'>26741812-4ddf-457d-b571-7e2005b5133d</entry>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <entry name='family'>Virtual Machine</entry>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </system>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </sysinfo>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <os>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <boot dev='hd'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <smbios mode='sysinfo'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </os>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <features>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <acpi/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <apic/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <vmcoreinfo state='on'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </features>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <vendor>AMD</vendor>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='x2apic'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='hypervisor'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='stibp'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='ssbd'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='overflow-recov'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='succor'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='ibrs'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='lbrv'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='pause-filter'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='xsaves'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='svm'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='topoext'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='npt'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='nrip-save'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </cpu>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <clock offset='utc'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <timer name='hpet' present='no'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </clock>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <on_poweroff>destroy</on_poweroff>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <on_reboot>restart</on_reboot>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <on_crash>destroy</on_crash>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <devices>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <disk type='network' device='disk'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <auth username='openstack'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <secret type='ceph' uuid='1a70b85d-e3fd-5814-8a6a-37ea00fcae30'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <source protocol='rbd' name='vms/26741812-4ddf-457d-b571-7e2005b5133d_disk' index='2'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <host name='192.168.122.100' port='6789'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <host name='192.168.122.102' port='6789'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <host name='192.168.122.101' port='6789'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       </source>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target dev='vda' bus='virtio'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='virtio-disk0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <disk type='network' device='cdrom'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <auth username='openstack'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <secret type='ceph' uuid='1a70b85d-e3fd-5814-8a6a-37ea00fcae30'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <source protocol='rbd' name='vms/26741812-4ddf-457d-b571-7e2005b5133d_disk.config' index='1'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <host name='192.168.122.100' port='6789'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <host name='192.168.122.102' port='6789'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <host name='192.168.122.101' port='6789'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       </source>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target dev='sda' bus='sata'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <readonly/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='sata0-0-0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pcie.0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='1' port='0x10'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='2' port='0x11'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.2'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='3' port='0x12'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.3'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='4' port='0x13'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.4'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='5' port='0x14'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.5'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='6' port='0x15'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.6'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='7' port='0x16'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.7'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='8' port='0x17'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.8'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='9' port='0x18'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.9'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='10' port='0x19'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.10'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='11' port='0x1a'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.11'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='12' port='0x1b'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.12'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='13' port='0x1c'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.13'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='14' port='0x1d'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.14'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='15' port='0x1e'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.15'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='16' port='0x1f'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.16'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='17' port='0x20'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.17'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='18' port='0x21'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.18'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='19' port='0x22'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.19'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='20' port='0x23'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.20'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='21' port='0x24'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.21'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='22' port='0x25'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.22'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='23' port='0x26'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.23'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='24' port='0x27'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.24'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='25' port='0x28'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.25'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-pci-bridge'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.26'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='usb'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='sata' index='0'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='ide'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <interface type='ethernet'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <mac address='fa:16:3e:1b:a5:e7'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target dev='tap92a5f80f-60'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model type='virtio'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <mtu size='1442'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='net0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <serial type='pty'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <source path='/dev/pts/0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <log file='/var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/console.log' append='off'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target type='isa-serial' port='0'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <model name='isa-serial'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       </target>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='serial0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </serial>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <source path='/dev/pts/0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <log file='/var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/console.log' append='off'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target type='serial' port='0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='serial0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </console>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <input type='tablet' bus='usb'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='input0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='usb' bus='0' port='1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </input>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <input type='mouse' bus='ps2'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='input1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </input>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <input type='keyboard' bus='ps2'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='input2'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </input>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <listen type='address' address='::0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </graphics>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <audio id='1' type='none'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <video>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='video0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </video>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <watchdog model='itco' action='reset'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='watchdog0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </watchdog>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <memballoon model='virtio'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <stats period='10'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='balloon0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </memballoon>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <rng model='virtio'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <backend model='random'>/dev/urandom</backend>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='rng0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </rng>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </devices>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <label>system_u:system_r:svirt_t:s0:c58,c762</label>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c58,c762</imagelabel>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </seclabel>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <label>+107:+107</label>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <imagelabel>+107:+107</imagelabel>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </seclabel>
Jan 26 10:13:16 compute-0 nova_compute[254880]: </domain>
Jan 26 10:13:16 compute-0 nova_compute[254880]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
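[editor's note] get_interface_by_cfg fetched the live domain XML above and found no <interface> matching the detached tap5a2a6f2c-40 device (only tap92a5f80f-60 remains in <devices>), so the guest no longer holds the port and the detach is treated as already complete. The same check, reduced to a MAC lookup, can be done with libvirt-python and ElementTree (sketch; nova compares more of the config than just the MAC):

    # Simplified form of the lookup logged above: search the live domain
    # XML for an <interface> by MAC. Nova matches more fields than this.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('26741812-4ddf-457d-b571-7e2005b5133d')
    root = ET.fromstring(dom.XMLDesc())

    for iface in root.findall('./devices/interface'):
        if iface.find('mac').get('address') == 'fa:16:3e:37:da:8f':
            print('still attached:', iface.find('target').get('dev'))
            break
    else:
        print('not found in domain')  # the case logged here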
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.683 254884 DEBUG nova.virt.libvirt.guest [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:37:da:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5a2a6f2c-40"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.686 254884 DEBUG nova.virt.libvirt.guest [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:37:da:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap5a2a6f2c-40"/></interface> not found in domain: <domain type='kvm' id='2'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <name>instance-00000006</name>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <uuid>26741812-4ddf-457d-b571-7e2005b5133d</uuid>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <metadata>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:name>tempest-TestNetworkBasicOps-server-955673138</nova:name>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:creationTime>2026-01-26 10:13:15</nova:creationTime>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:flavor name="m1.nano">
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:memory>128</nova:memory>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:disk>1</nova:disk>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:swap>0</nova:swap>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:vcpus>1</nova:vcpus>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </nova:flavor>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:owner>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:user uuid="c1208d3e25b940ea93fe76884c7a53db">tempest-TestNetworkBasicOps-966559857-project-member</nova:user>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:project uuid="6ed221b375a44fc2bb2a8f232c5446e7">tempest-TestNetworkBasicOps-966559857</nova:project>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </nova:owner>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:root type="image" uuid="6789692f-fc1f-4efa-ae75-dcc13be695ef"/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:ports>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:port uuid="92a5f80f-60e2-449d-9da8-ebaa31f1476c">
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </nova:port>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </nova:ports>
Jan 26 10:13:16 compute-0 nova_compute[254880]: </nova:instance>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </metadata>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <memory unit='KiB'>131072</memory>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <vcpu placement='static'>1</vcpu>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <resource>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <partition>/machine</partition>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </resource>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <sysinfo type='smbios'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <system>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <entry name='manufacturer'>RDO</entry>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <entry name='serial'>26741812-4ddf-457d-b571-7e2005b5133d</entry>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <entry name='uuid'>26741812-4ddf-457d-b571-7e2005b5133d</entry>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <entry name='family'>Virtual Machine</entry>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </system>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </sysinfo>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <os>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <boot dev='hd'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <smbios mode='sysinfo'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </os>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <features>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <acpi/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <apic/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <vmcoreinfo state='on'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </features>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <vendor>AMD</vendor>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='x2apic'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='hypervisor'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='stibp'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='ssbd'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='overflow-recov'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='succor'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='ibrs'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='lbrv'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='pause-filter'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='xsaves'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='svm'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='require' name='topoext'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='npt'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <feature policy='disable' name='nrip-save'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </cpu>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <clock offset='utc'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <timer name='hpet' present='no'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </clock>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <on_poweroff>destroy</on_poweroff>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <on_reboot>restart</on_reboot>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <on_crash>destroy</on_crash>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <devices>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <disk type='network' device='disk'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <auth username='openstack'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <secret type='ceph' uuid='1a70b85d-e3fd-5814-8a6a-37ea00fcae30'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <source protocol='rbd' name='vms/26741812-4ddf-457d-b571-7e2005b5133d_disk' index='2'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <host name='192.168.122.100' port='6789'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <host name='192.168.122.102' port='6789'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <host name='192.168.122.101' port='6789'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       </source>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target dev='vda' bus='virtio'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='virtio-disk0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <disk type='network' device='cdrom'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <auth username='openstack'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <secret type='ceph' uuid='1a70b85d-e3fd-5814-8a6a-37ea00fcae30'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <source protocol='rbd' name='vms/26741812-4ddf-457d-b571-7e2005b5133d_disk.config' index='1'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <host name='192.168.122.100' port='6789'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <host name='192.168.122.102' port='6789'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <host name='192.168.122.101' port='6789'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       </source>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target dev='sda' bus='sata'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <readonly/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='sata0-0-0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pcie.0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='1' port='0x10'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='2' port='0x11'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.2'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='3' port='0x12'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.3'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='4' port='0x13'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.4'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='5' port='0x14'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.5'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='6' port='0x15'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.6'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='7' port='0x16'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.7'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='8' port='0x17'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.8'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='9' port='0x18'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.9'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='10' port='0x19'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.10'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='11' port='0x1a'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.11'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='12' port='0x1b'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.12'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='13' port='0x1c'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.13'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='14' port='0x1d'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.14'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='15' port='0x1e'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.15'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='16' port='0x1f'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.16'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='17' port='0x20'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.17'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='18' port='0x21'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.18'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='19' port='0x22'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.19'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='20' port='0x23'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.20'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='21' port='0x24'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.21'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='22' port='0x25'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.22'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='23' port='0x26'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.23'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='24' port='0x27'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.24'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-root-port'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target chassis='25' port='0x28'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.25'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model name='pcie-pci-bridge'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='pci.26'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='usb'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <controller type='sata' index='0'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='ide'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </controller>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <interface type='ethernet'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <mac address='fa:16:3e:1b:a5:e7'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target dev='tap92a5f80f-60'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model type='virtio'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <mtu size='1442'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='net0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <serial type='pty'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <source path='/dev/pts/0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <log file='/var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/console.log' append='off'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target type='isa-serial' port='0'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:         <model name='isa-serial'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       </target>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='serial0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </serial>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <source path='/dev/pts/0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <log file='/var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d/console.log' append='off'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <target type='serial' port='0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='serial0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </console>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <input type='tablet' bus='usb'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='input0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='usb' bus='0' port='1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </input>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <input type='mouse' bus='ps2'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='input1'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </input>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <input type='keyboard' bus='ps2'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='input2'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </input>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <listen type='address' address='::0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </graphics>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <audio id='1' type='none'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <video>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='video0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </video>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <watchdog model='itco' action='reset'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='watchdog0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </watchdog>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <memballoon model='virtio'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <stats period='10'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='balloon0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </memballoon>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <rng model='virtio'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <backend model='random'>/dev/urandom</backend>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <alias name='rng0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </rng>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </devices>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <label>system_u:system_r:svirt_t:s0:c58,c762</label>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c58,c762</imagelabel>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </seclabel>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <label>+107:+107</label>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <imagelabel>+107:+107</imagelabel>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </seclabel>
Jan 26 10:13:16 compute-0 nova_compute[254880]: </domain>
Jan 26 10:13:16 compute-0 nova_compute[254880]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.687 254884 WARNING nova.virt.libvirt.driver [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Detaching interface fa:16:3e:37:da:8f failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap5a2a6f2c-40' not found.
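The warning follows directly from the dump above: the domain's only NIC is fa:16:3e:1b:a5:e7 on tap92a5f80f-60, while the detach request names fa:16:3e:37:da:8f on tap5a2a6f2c-40, so nothing matches and nova raises DeviceNotFound. A minimal sketch of that matching step, using ElementTree on a trimmed copy of the domain rather than nova's own config objects:

    import xml.etree.ElementTree as ET

    # Trimmed copy of the domain dumped above (one interface only).
    DOMAIN_XML = """
    <domain type='kvm'>
      <devices>
        <interface type='ethernet'>
          <mac address='fa:16:3e:1b:a5:e7'/>
          <target dev='tap92a5f80f-60'/>
        </interface>
      </devices>
    </domain>
    """

    def find_interface(domain_xml, mac, dev):
        # Simplified stand-in for guest.get_interface_by_cfg(): match on
        # MAC address and target device name.
        root = ET.fromstring(domain_xml)
        for iface in root.findall("./devices/interface"):
            m, t = iface.find("mac"), iface.find("target")
            if (m is not None and m.get("address") == mac and
                    t is not None and t.get("dev") == dev):
                return iface
        return None

    # The detached port's MAC/tap are absent, hence DeviceNotFound.
    print(find_interface(DOMAIN_XML, "fa:16:3e:37:da:8f", "tap5a2a6f2c-40"))  # None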
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.688 254884 DEBUG nova.virt.libvirt.vif [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T10:11:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-955673138',display_name='tempest-TestNetworkBasicOps-server-955673138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-955673138',id=6,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCEIavFfmzh5bpA5QZf3zq5Gb6QqYI3VELaJd/a0a5TYtMMLwGqLcOYuI5vMKbR7fL+izNWg9808jvE9yRGaxYOyB4XbsZVXNV2ntaIKcWPfcrVa/D66+pB1i/BBWQEzIQ==',key_name='tempest-TestNetworkBasicOps-822391309',keypairs=<?>,launch_index=0,launched_at=2026-01-26T10:11:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-wm8zw3uy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T10:11:35Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=26741812-4ddf-457d-b571-7e2005b5133d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.688 254884 DEBUG nova.network.os_vif_util [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Converting VIF {"id": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "address": "fa:16:3e:37:da:8f", "network": {"id": "ae1cb66c-0987-4156-9bdb-cb2a08957306", "bridge": "br-int", "label": "tempest-network-smoke--514366077", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2a6f2c-40", "ovs_interfaceid": "5a2a6f2c-40e2-42ce-9d76-e334db61eeb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.689 254884 DEBUG nova.network.os_vif_util [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:da:8f,bridge_name='br-int',has_traffic_filtering=True,id=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8,network=Network(ae1cb66c-0987-4156-9bdb-cb2a08957306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2a6f2c-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.689 254884 DEBUG os_vif [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:da:8f,bridge_name='br-int',has_traffic_filtering=True,id=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8,network=Network(ae1cb66c-0987-4156-9bdb-cb2a08957306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2a6f2c-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.691 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.691 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a2a6f2c-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.692 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.694 254884 INFO os_vif [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:da:8f,bridge_name='br-int',has_traffic_filtering=True,id=5a2a6f2c-40e2-42ce-9d76-e334db61eeb8,network=Network(ae1cb66c-0987-4156-9bdb-cb2a08957306),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2a6f2c-40')
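The sequence above shows why the stale detach is harmless at the OVS layer: os-vif sends DelPortCommand with if_exists=True, so deleting a port that is already gone commits as an empty transaction ("Transaction caused no change"). Roughly the same idempotent delete can be issued with ovsdbapp directly; a sketch, where the OVSDB endpoint and timeout are assumptions (os-vif takes them from its own configuration):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "tcp:127.0.0.1:6640"  # assumed endpoint

    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True makes the delete idempotent: a second run against a
    # port that no longer exists is committed as a no-op.
    api.del_port("tap5a2a6f2c-40", bridge="br-int",
                 if_exists=True).execute(check_error=True)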
Jan 26 10:13:16 compute-0 nova_compute[254880]: 2026-01-26 10:13:16.695 254884 DEBUG nova.virt.libvirt.guest [req-9bca51f9-6951-49cf-a93e-6d99491b32e3 req-0626330a-a414-4712-b5cb-95f978227a10 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:name>tempest-TestNetworkBasicOps-server-955673138</nova:name>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:creationTime>2026-01-26 10:13:16</nova:creationTime>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:flavor name="m1.nano">
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:memory>128</nova:memory>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:disk>1</nova:disk>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:swap>0</nova:swap>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:vcpus>1</nova:vcpus>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </nova:flavor>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:owner>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:user uuid="c1208d3e25b940ea93fe76884c7a53db">tempest-TestNetworkBasicOps-966559857-project-member</nova:user>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:project uuid="6ed221b375a44fc2bb2a8f232c5446e7">tempest-TestNetworkBasicOps-966559857</nova:project>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </nova:owner>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:root type="image" uuid="6789692f-fc1f-4efa-ae75-dcc13be695ef"/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   <nova:ports>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     <nova:port uuid="92a5f80f-60e2-449d-9da8-ebaa31f1476c">
Jan 26 10:13:16 compute-0 nova_compute[254880]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 10:13:16 compute-0 nova_compute[254880]:     </nova:port>
Jan 26 10:13:16 compute-0 nova_compute[254880]:   </nova:ports>
Jan 26 10:13:16 compute-0 nova_compute[254880]: </nova:instance>
Jan 26 10:13:16 compute-0 nova_compute[254880]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 10:13:16 compute-0 podman[269878]: 2026-01-26 10:13:16.758778276 +0000 UTC m=+0.023121973 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:13:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:13:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:16.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:13:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:13:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:13:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:13:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:13:17 compute-0 podman[269878]: 2026-01-26 10:13:17.004910425 +0000 UTC m=+0.269254092 container create 24af8a57328ad314c75d03ad127c3c9154c5c733b2b0a70110eba148c7bc82b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:13:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:13:17.157Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:13:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:13:17.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:13:17 compute-0 systemd[1]: Started libpod-conmon-24af8a57328ad314c75d03ad127c3c9154c5c733b2b0a70110eba148c7bc82b7.scope.
Jan 26 10:13:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:13:17 compute-0 podman[269878]: 2026-01-26 10:13:17.246147201 +0000 UTC m=+0.510490888 container init 24af8a57328ad314c75d03ad127c3c9154c5c733b2b0a70110eba148c7bc82b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nash, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:13:17 compute-0 podman[269878]: 2026-01-26 10:13:17.253595214 +0000 UTC m=+0.517938881 container start 24af8a57328ad314c75d03ad127c3c9154c5c733b2b0a70110eba148c7bc82b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nash, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:13:17 compute-0 podman[269878]: 2026-01-26 10:13:17.257155011 +0000 UTC m=+0.521498728 container attach 24af8a57328ad314c75d03ad127c3c9154c5c733b2b0a70110eba148c7bc82b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 10:13:17 compute-0 great_nash[269895]: 167 167
Jan 26 10:13:17 compute-0 systemd[1]: libpod-24af8a57328ad314c75d03ad127c3c9154c5c733b2b0a70110eba148c7bc82b7.scope: Deactivated successfully.
Jan 26 10:13:17 compute-0 podman[269878]: 2026-01-26 10:13:17.263939737 +0000 UTC m=+0.528283404 container died 24af8a57328ad314c75d03ad127c3c9154c5c733b2b0a70110eba148c7bc82b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-12fd43ef9d462f0bff87c3c65b953617eb8c7b5206cc2adac842df7d6f369b71-merged.mount: Deactivated successfully.
Jan 26 10:13:17 compute-0 podman[269878]: 2026-01-26 10:13:17.321997141 +0000 UTC m=+0.586340818 container remove 24af8a57328ad314c75d03ad127c3c9154c5c733b2b0a70110eba148c7bc82b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 10:13:17 compute-0 systemd[1]: libpod-conmon-24af8a57328ad314c75d03ad127c3c9154c5c733b2b0a70110eba148c7bc82b7.scope: Deactivated successfully.
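The great_nash container is a typical cephadm one-shot: image pull, create, start, attach, a single line of output ("167 167", the ceph uid and gid baked into the image), then died and remove within a quarter of a second. Something close to it can be reproduced by hand; a sketch, where the stat probe of /var/lib/ceph is an assumption, since the container's actual entrypoint is not visible in the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # One-shot container: run a single command, auto-remove on exit,
    # matching the create/start/died/remove sequence above.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out.strip())  # expected "167 167" for this image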
Jan 26 10:13:17 compute-0 podman[269920]: 2026-01-26 10:13:17.520345386 +0000 UTC m=+0.052798863 container create dd33027af0e11896237e57ea39d103e6b867c02ea42031ebed37b56db33cad59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 10:13:17 compute-0 systemd[1]: Started libpod-conmon-dd33027af0e11896237e57ea39d103e6b867c02ea42031ebed37b56db33cad59.scope.
Jan 26 10:13:17 compute-0 podman[269920]: 2026-01-26 10:13:17.496424983 +0000 UTC m=+0.028878480 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:13:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f10ab40fbe192cf2fe2340ba966f7787822382b5e8e604db5c4c84930f4ac0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f10ab40fbe192cf2fe2340ba966f7787822382b5e8e604db5c4c84930f4ac0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f10ab40fbe192cf2fe2340ba966f7787822382b5e8e604db5c4c84930f4ac0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f10ab40fbe192cf2fe2340ba966f7787822382b5e8e604db5c4c84930f4ac0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:13:17 compute-0 podman[269920]: 2026-01-26 10:13:17.616835 +0000 UTC m=+0.149288507 container init dd33027af0e11896237e57ea39d103e6b867c02ea42031ebed37b56db33cad59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_jang, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:13:17 compute-0 podman[269920]: 2026-01-26 10:13:17.624303654 +0000 UTC m=+0.156757131 container start dd33027af0e11896237e57ea39d103e6b867c02ea42031ebed37b56db33cad59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 10:13:17 compute-0 podman[269920]: 2026-01-26 10:13:17.628465337 +0000 UTC m=+0.160918824 container attach dd33027af0e11896237e57ea39d103e6b867c02ea42031ebed37b56db33cad59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_jang, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:13:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v932: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 16 KiB/s wr, 57 op/s
Jan 26 10:13:17 compute-0 gallant_jang[269936]: {
Jan 26 10:13:17 compute-0 gallant_jang[269936]:     "0": [
Jan 26 10:13:17 compute-0 gallant_jang[269936]:         {
Jan 26 10:13:17 compute-0 gallant_jang[269936]:             "devices": [
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "/dev/loop3"
Jan 26 10:13:17 compute-0 gallant_jang[269936]:             ],
Jan 26 10:13:17 compute-0 gallant_jang[269936]:             "lv_name": "ceph_lv0",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:             "lv_size": "21470642176",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:             "name": "ceph_lv0",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:             "tags": {
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "ceph.cluster_name": "ceph",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "ceph.crush_device_class": "",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "ceph.encrypted": "0",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "ceph.osd_id": "0",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "ceph.type": "block",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "ceph.vdo": "0",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:                 "ceph.with_tpm": "0"
Jan 26 10:13:17 compute-0 gallant_jang[269936]:             },
Jan 26 10:13:17 compute-0 gallant_jang[269936]:             "type": "block",
Jan 26 10:13:17 compute-0 gallant_jang[269936]:             "vg_name": "ceph_vg0"
Jan 26 10:13:17 compute-0 gallant_jang[269936]:         }
Jan 26 10:13:17 compute-0 gallant_jang[269936]:     ]
Jan 26 10:13:17 compute-0 gallant_jang[269936]: }
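[annotation] The JSON block emitted by the short-lived gallant_jang container is ceph-volume's inventory of OSD 0's logical volume on /dev/loop3 (the companion "raw list" invocation appears below at 10:13:18). A minimal sketch of consuming this output, assuming it was captured to a hypothetical file named ceph_volume_list.json:

    import json

    # Hypothetical capture of the container stdout shown above.
    with open("ceph_volume_list.json") as fh:
        listing = json.load(fh)

    # Top-level keys are OSD ids; each maps to a list of logical volumes.
    for osd_id, lvs in listing.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']}"
                  f" devices={','.join(lv['devices'])}"
                  f" osd_fsid={tags.get('ceph.osd_fsid')}"
                  f" encrypted={tags.get('ceph.encrypted')}")
    # -> osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 ...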
Jan 26 10:13:17 compute-0 systemd[1]: libpod-dd33027af0e11896237e57ea39d103e6b867c02ea42031ebed37b56db33cad59.scope: Deactivated successfully.
Jan 26 10:13:17 compute-0 podman[269920]: 2026-01-26 10:13:17.901569733 +0000 UTC m=+0.434023220 container died dd33027af0e11896237e57ea39d103e6b867c02ea42031ebed37b56db33cad59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_jang, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 10:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6f10ab40fbe192cf2fe2340ba966f7787822382b5e8e604db5c4c84930f4ac0-merged.mount: Deactivated successfully.
Jan 26 10:13:17 compute-0 podman[269920]: 2026-01-26 10:13:17.944945857 +0000 UTC m=+0.477399324 container remove dd33027af0e11896237e57ea39d103e6b867c02ea42031ebed37b56db33cad59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:13:17 compute-0 systemd[1]: libpod-conmon-dd33027af0e11896237e57ea39d103e6b867c02ea42031ebed37b56db33cad59.scope: Deactivated successfully.
Jan 26 10:13:18 compute-0 sudo[269811]: pam_unix(sudo:session): session closed for user root
Jan 26 10:13:18 compute-0 sudo[269957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:13:18 compute-0 sudo[269957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:13:18 compute-0 sudo[269957]: pam_unix(sudo:session): session closed for user root
Jan 26 10:13:18 compute-0 sudo[269982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:13:18 compute-0 sudo[269982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:13:18 compute-0 nova_compute[254880]: 2026-01-26 10:13:18.243 254884 DEBUG nova.compute.manager [req-5aa63817-bdf7-4737-9336-5778ac20743e req-4e237ad6-4fd2-4351-91d4-e58bed2e43c2 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-vif-plugged-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:13:18 compute-0 nova_compute[254880]: 2026-01-26 10:13:18.245 254884 DEBUG oslo_concurrency.lockutils [req-5aa63817-bdf7-4737-9336-5778ac20743e req-4e237ad6-4fd2-4351-91d4-e58bed2e43c2 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "26741812-4ddf-457d-b571-7e2005b5133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:13:18 compute-0 nova_compute[254880]: 2026-01-26 10:13:18.245 254884 DEBUG oslo_concurrency.lockutils [req-5aa63817-bdf7-4737-9336-5778ac20743e req-4e237ad6-4fd2-4351-91d4-e58bed2e43c2 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:13:18 compute-0 nova_compute[254880]: 2026-01-26 10:13:18.245 254884 DEBUG oslo_concurrency.lockutils [req-5aa63817-bdf7-4737-9336-5778ac20743e req-4e237ad6-4fd2-4351-91d4-e58bed2e43c2 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:13:18 compute-0 nova_compute[254880]: 2026-01-26 10:13:18.245 254884 DEBUG nova.compute.manager [req-5aa63817-bdf7-4737-9336-5778ac20743e req-4e237ad6-4fd2-4351-91d4-e58bed2e43c2 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] No waiting events found dispatching network-vif-plugged-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:13:18 compute-0 nova_compute[254880]: 2026-01-26 10:13:18.245 254884 WARNING nova.compute.manager [req-5aa63817-bdf7-4737-9336-5778ac20743e req-4e237ad6-4fd2-4351-91d4-e58bed2e43c2 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received unexpected event network-vif-plugged-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 for instance with vm_state active and task_state None.
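[annotation] The Acquiring/acquired/released triplet around the "-events" lock is oslo.concurrency's standard lock logging, and the 0.000s hold times show the critical section only mutates an in-memory event map. A minimal sketch of the same pattern (the lock name and function are illustrative, not Nova's actual code):

    from oslo_concurrency import lockutils

    # Illustrative lock name; lockutils itself emits the
    # Acquiring/acquired/released DEBUG lines seen above.
    @lockutils.synchronized("instance-events")
    def pop_event(events, key):
        # Held only long enough to mutate a dict, hence "held 0.000s".
        return events.pop(key, None)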
Jan 26 10:13:18 compute-0 ovn_controller[155832]: 2026-01-26T10:13:18Z|00053|binding|INFO|Releasing lport dcac661c-085c-4e05-b3e8-715548b0fd7e from this chassis (sb_readonly=0)
Jan 26 10:13:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:13:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:18.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:13:18 compute-0 nova_compute[254880]: 2026-01-26 10:13:18.340 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:18 compute-0 nova_compute[254880]: 2026-01-26 10:13:18.520 254884 INFO nova.network.neutron [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Port 5a2a6f2c-40e2-42ce-9d76-e334db61eeb8 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 26 10:13:18 compute-0 nova_compute[254880]: 2026-01-26 10:13:18.520 254884 DEBUG nova.network.neutron [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updating instance_info_cache with network_info: [{"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:13:18 compute-0 nova_compute[254880]: 2026-01-26 10:13:18.541 254884 DEBUG oslo_concurrency.lockutils [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Releasing lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:13:18 compute-0 podman[270049]: 2026-01-26 10:13:18.565480577 +0000 UTC m=+0.048293149 container create 1515999a2a47d6f5763fcc578aa3b410360f69e41895a2262c5ee5080812c208 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 26 10:13:18 compute-0 nova_compute[254880]: 2026-01-26 10:13:18.567 254884 DEBUG oslo_concurrency.lockutils [None req-7cfed95f-6894-4d8a-8873-e3165afedca5 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "interface-26741812-4ddf-457d-b571-7e2005b5133d-5a2a6f2c-40e2-42ce-9d76-e334db61eeb8" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:13:18 compute-0 systemd[1]: Started libpod-conmon-1515999a2a47d6f5763fcc578aa3b410360f69e41895a2262c5ee5080812c208.scope.
Jan 26 10:13:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:13:18 compute-0 podman[270049]: 2026-01-26 10:13:18.544811272 +0000 UTC m=+0.027623874 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:13:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:13:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:13:18
Jan 26 10:13:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:13:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:13:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'volumes', 'images', 'default.rgw.log', 'default.rgw.control', '.mgr', '.nfs', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms']
Jan 26 10:13:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
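[annotation] The balancer pass above (mode upmap, max misplaced 5%) walked twelve pools and prepared 0 of a possible 10 upmap changes, i.e. the cluster is already balanced. The same state can be read back over the CLI; a hedged sketch, assuming an admin keyring is available on this host:

    import json
    import subprocess

    # `ceph balancer status` is the CLI face of the mgr module logging above.
    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))  # expect: upmap True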
Jan 26 10:13:18 compute-0 podman[270049]: 2026-01-26 10:13:18.670211386 +0000 UTC m=+0.153023978 container init 1515999a2a47d6f5763fcc578aa3b410360f69e41895a2262c5ee5080812c208 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 10:13:18 compute-0 podman[270049]: 2026-01-26 10:13:18.676639951 +0000 UTC m=+0.159452523 container start 1515999a2a47d6f5763fcc578aa3b410360f69e41895a2262c5ee5080812c208 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bose, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 10:13:18 compute-0 podman[270049]: 2026-01-26 10:13:18.680538237 +0000 UTC m=+0.163350849 container attach 1515999a2a47d6f5763fcc578aa3b410360f69e41895a2262c5ee5080812c208 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 26 10:13:18 compute-0 crazy_bose[270066]: 167 167
Jan 26 10:13:18 compute-0 systemd[1]: libpod-1515999a2a47d6f5763fcc578aa3b410360f69e41895a2262c5ee5080812c208.scope: Deactivated successfully.
Jan 26 10:13:18 compute-0 podman[270049]: 2026-01-26 10:13:18.682586394 +0000 UTC m=+0.165398996 container died 1515999a2a47d6f5763fcc578aa3b410360f69e41895a2262c5ee5080812c208 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bose, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 10:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e76b311b52bb65c2e24e83351fdc98915228557853da0a5555fb6684a8a1dffb-merged.mount: Deactivated successfully.
Jan 26 10:13:18 compute-0 podman[270049]: 2026-01-26 10:13:18.732080655 +0000 UTC m=+0.214893227 container remove 1515999a2a47d6f5763fcc578aa3b410360f69e41895a2262c5ee5080812c208 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_bose, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:13:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:13:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:13:18 compute-0 systemd[1]: libpod-conmon-1515999a2a47d6f5763fcc578aa3b410360f69e41895a2262c5ee5080812c208.scope: Deactivated successfully.
Jan 26 10:13:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:13:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:13:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:13:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:13:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:13:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:13:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:18.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
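[annotation] The anonymous "HEAD / HTTP/1.0" requests hitting radosgw from 192.168.122.100 and .102 at sub-millisecond latency are load-balancer health probes (haproxy-style checks), not user traffic. A minimal probe reproducing one; the port is an assumption, since the log shows only the probe sources:

    import http.client

    # Port 8080 is assumed; RGW answers HEAD / with 200 and an empty body.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200
    conn.close()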
Jan 26 10:13:18 compute-0 podman[270091]: 2026-01-26 10:13:18.916107559 +0000 UTC m=+0.049106923 container create b3a1231511c97c7182aac5f1b7b1fd87d9e2a07c8d94e58b20d642176e2ce50d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_jang, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:13:18 compute-0 systemd[1]: Started libpod-conmon-b3a1231511c97c7182aac5f1b7b1fd87d9e2a07c8d94e58b20d642176e2ce50d.scope.
Jan 26 10:13:18 compute-0 ceph-mon[74456]: pgmap v932: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 16 KiB/s wr, 57 op/s
Jan 26 10:13:18 compute-0 podman[270091]: 2026-01-26 10:13:18.892959047 +0000 UTC m=+0.025958441 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:13:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9833301d04b3c17d5568cdc94ec3e51373a1e808a92d38d1f7284f727c1e14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9833301d04b3c17d5568cdc94ec3e51373a1e808a92d38d1f7284f727c1e14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9833301d04b3c17d5568cdc94ec3e51373a1e808a92d38d1f7284f727c1e14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9833301d04b3c17d5568cdc94ec3e51373a1e808a92d38d1f7284f727c1e14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:13:19 compute-0 podman[270091]: 2026-01-26 10:13:19.012540841 +0000 UTC m=+0.145540235 container init b3a1231511c97c7182aac5f1b7b1fd87d9e2a07c8d94e58b20d642176e2ce50d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_jang, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:13:19 compute-0 podman[270091]: 2026-01-26 10:13:19.019143581 +0000 UTC m=+0.152142945 container start b3a1231511c97c7182aac5f1b7b1fd87d9e2a07c8d94e58b20d642176e2ce50d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:13:19 compute-0 podman[270091]: 2026-01-26 10:13:19.025455584 +0000 UTC m=+0.158454958 container attach b3a1231511c97c7182aac5f1b7b1fd87d9e2a07c8d94e58b20d642176e2ce50d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_jang, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007623889960604285 of space, bias 1.0, pg target 0.22871669881812856 quantized to 32 (current 32)
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
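[annotation] The pg_autoscaler lines above follow a visible arithmetic: pg target = capacity ratio x bias x 300, where 300 is plausibly the default mon_target_pg_per_osd (100) times the three OSDs backing this 60 GiB cluster. The "quantized to" figures additionally reflect power-of-two rounding plus per-pool floors and shrink hysteresis not modeled here. A worked check against two of the pools:

    # Reproduce the raw pg targets printed by the autoscaler.
    TARGET_PGS = 100 * 3  # assumed: mon_target_pg_per_osd x OSD count

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * TARGET_PGS

    print(pg_target(0.0007623889960604285, 1.0))
    # 'vms' -> ~0.2287167 (log: 0.22871669881812856)
    print(pg_target(5.087256625643029e-07, 4.0))
    # 'cephfs.cephfs.meta' -> ~0.00061047 (log: 0.0006104707950771635)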
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.285 254884 DEBUG nova.compute.manager [req-f14d649c-cf27-4f7f-9faf-ebc09043af05 req-a0764405-0d36-4fc6-bd0a-31af22f1c0b6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-changed-92a5f80f-60e2-449d-9da8-ebaa31f1476c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.285 254884 DEBUG nova.compute.manager [req-f14d649c-cf27-4f7f-9faf-ebc09043af05 req-a0764405-0d36-4fc6-bd0a-31af22f1c0b6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Refreshing instance network info cache due to event network-changed-92a5f80f-60e2-449d-9da8-ebaa31f1476c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.286 254884 DEBUG oslo_concurrency.lockutils [req-f14d649c-cf27-4f7f-9faf-ebc09043af05 req-a0764405-0d36-4fc6-bd0a-31af22f1c0b6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.286 254884 DEBUG oslo_concurrency.lockutils [req-f14d649c-cf27-4f7f-9faf-ebc09043af05 req-a0764405-0d36-4fc6-bd0a-31af22f1c0b6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.286 254884 DEBUG nova.network.neutron [req-f14d649c-cf27-4f7f-9faf-ebc09043af05 req-a0764405-0d36-4fc6-bd0a-31af22f1c0b6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Refreshing network info cache for port 92a5f80f-60e2-449d-9da8-ebaa31f1476c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.358 254884 DEBUG oslo_concurrency.lockutils [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "26741812-4ddf-457d-b571-7e2005b5133d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.358 254884 DEBUG oslo_concurrency.lockutils [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.358 254884 DEBUG oslo_concurrency.lockutils [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "26741812-4ddf-457d-b571-7e2005b5133d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.359 254884 DEBUG oslo_concurrency.lockutils [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.359 254884 DEBUG oslo_concurrency.lockutils [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.360 254884 INFO nova.compute.manager [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Terminating instance
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.361 254884 DEBUG nova.compute.manager [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 10:13:19 compute-0 kernel: tap92a5f80f-60 (unregistering): left promiscuous mode
Jan 26 10:13:19 compute-0 NetworkManager[48970]: <info>  [1769422399.4264] device (tap92a5f80f-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.435 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:19 compute-0 ovn_controller[155832]: 2026-01-26T10:13:19Z|00054|binding|INFO|Releasing lport 92a5f80f-60e2-449d-9da8-ebaa31f1476c from this chassis (sb_readonly=0)
Jan 26 10:13:19 compute-0 ovn_controller[155832]: 2026-01-26T10:13:19Z|00055|binding|INFO|Setting lport 92a5f80f-60e2-449d-9da8-ebaa31f1476c down in Southbound
Jan 26 10:13:19 compute-0 ovn_controller[155832]: 2026-01-26T10:13:19Z|00056|binding|INFO|Removing iface tap92a5f80f-60 ovn-installed in OVS
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.443 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.448 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:a5:e7 10.100.0.11'], port_security=['fa:16:3e:1b:a5:e7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '26741812-4ddf-457d-b571-7e2005b5133d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-856aef2b-c9c5-4069-832f-1db92e31d6c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '150e301c-4333-4419-97ed-4e455dd1f149', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc13df43-1d01-44bd-8119-99eabe1edcf4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], logical_port=92a5f80f-60e2-449d-9da8-ebaa31f1476c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.449 166625 INFO neutron.agent.ovn.metadata.agent [-] Port 92a5f80f-60e2-449d-9da8-ebaa31f1476c in datapath 856aef2b-c9c5-4069-832f-1db92e31d6c2 unbound from our chassis
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.451 166625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 856aef2b-c9c5-4069-832f-1db92e31d6c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.453 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[622aade5-2e2c-4471-9c0d-262709b4c3d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.453 166625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2 namespace which is not needed anymore
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.463 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:19 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 26 10:13:19 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Consumed 17.139s CPU time.
Jan 26 10:13:19 compute-0 systemd-machined[221254]: Machine qemu-2-instance-00000006 terminated.
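[annotation] The scope deactivation and "Machine qemu-2-instance-00000006 terminated" are the libvirt side of Nova's _shutdown_instance: the qemu domain is hard-stopped, then systemd-machined unregisters it. A hedged sketch of the same teardown via libvirt-python (Nova keys the domain by the instance UUID):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("26741812-4ddf-457d-b571-7e2005b5133d")
    dom.destroy()  # hard power-off; Nova then logs "Instance destroyed successfully."
    conn.close()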
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.633 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:19 compute-0 neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2[268037]: [NOTICE]   (268041) : haproxy version is 2.8.14-c23fe91
Jan 26 10:13:19 compute-0 neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2[268037]: [NOTICE]   (268041) : path to executable is /usr/sbin/haproxy
Jan 26 10:13:19 compute-0 neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2[268037]: [WARNING]  (268041) : Exiting Master process...
Jan 26 10:13:19 compute-0 neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2[268037]: [ALERT]    (268041) : Current worker (268043) exited with code 143 (Terminated)
Jan 26 10:13:19 compute-0 neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2[268037]: [WARNING]  (268041) : All workers exited. Exiting... (0)
Jan 26 10:13:19 compute-0 systemd[1]: libpod-e30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6.scope: Deactivated successfully.
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.650 254884 INFO nova.virt.libvirt.driver [-] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Instance destroyed successfully.
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.651 254884 DEBUG nova.objects.instance [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'resources' on Instance uuid 26741812-4ddf-457d-b571-7e2005b5133d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:13:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v933: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 16 KiB/s wr, 57 op/s
Jan 26 10:13:19 compute-0 podman[270198]: 2026-01-26 10:13:19.656223923 +0000 UTC m=+0.097950955 container died e30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.665 254884 DEBUG nova.virt.libvirt.vif [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T10:11:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-955673138',display_name='tempest-TestNetworkBasicOps-server-955673138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-955673138',id=6,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCEIavFfmzh5bpA5QZf3zq5Gb6QqYI3VELaJd/a0a5TYtMMLwGqLcOYuI5vMKbR7fL+izNWg9808jvE9yRGaxYOyB4XbsZVXNV2ntaIKcWPfcrVa/D66+pB1i/BBWQEzIQ==',key_name='tempest-TestNetworkBasicOps-822391309',keypairs=<?>,launch_index=0,launched_at=2026-01-26T10:11:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-wm8zw3uy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T10:11:35Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=26741812-4ddf-457d-b571-7e2005b5133d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.666 254884 DEBUG nova.network.os_vif_util [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.666 254884 DEBUG nova.network.os_vif_util [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1b:a5:e7,bridge_name='br-int',has_traffic_filtering=True,id=92a5f80f-60e2-449d-9da8-ebaa31f1476c,network=Network(856aef2b-c9c5-4069-832f-1db92e31d6c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92a5f80f-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.666 254884 DEBUG os_vif [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:a5:e7,bridge_name='br-int',has_traffic_filtering=True,id=92a5f80f-60e2-449d-9da8-ebaa31f1476c,network=Network(856aef2b-c9c5-4069-832f-1db92e31d6c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92a5f80f-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.668 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.669 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92a5f80f-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.670 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.673 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:19 compute-0 lvm[270229]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:13:19 compute-0 lvm[270229]: VG ceph_vg0 finished
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.677 254884 INFO os_vif [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:a5:e7,bridge_name='br-int',has_traffic_filtering=True,id=92a5f80f-60e2-449d-9da8-ebaa31f1476c,network=Network(856aef2b-c9c5-4069-832f-1db92e31d6c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92a5f80f-60')
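[annotation] The unplug above ran a single OVSDB transaction, DelPortCommand(port=tap92a5f80f-60, bridge=br-int, if_exists=True). The CLI equivalent, sketched with subprocess rather than ovsdbapp's native transaction API:

    import subprocess

    # Same effect as the DelPortCommand transaction in the log.
    subprocess.run(["ovs-vsctl", "--if-exists", "del-port",
                    "br-int", "tap92a5f80f-60"], check=True)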
Jan 26 10:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6-userdata-shm.mount: Deactivated successfully.
Jan 26 10:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbc56c0fa5845305781e311494871c58bd6083931411c754b6f88c3b9ccc4957-merged.mount: Deactivated successfully.
Jan 26 10:13:19 compute-0 podman[270198]: 2026-01-26 10:13:19.707086771 +0000 UTC m=+0.148813803 container cleanup e30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 10:13:19 compute-0 systemd[1]: libpod-conmon-e30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6.scope: Deactivated successfully.
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.744 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:19 compute-0 tender_jang[270108]: {}
Jan 26 10:13:19 compute-0 podman[270258]: 2026-01-26 10:13:19.789304666 +0000 UTC m=+0.054709694 container remove e30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.796 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[eee1756d-282a-462a-930c-93660b0c3b93]: (4, ('Mon Jan 26 10:13:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2 (e30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6)\ne30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6\nMon Jan 26 10:13:19 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2 (e30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6)\ne30ea69b5e8025e3ea46ad7f7537c34caa83ca7acdc3c2adaf1ab11273aa8fd6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.797 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d1410f-a874-4c8d-9965-759e5a732f23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.798 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap856aef2b-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.800 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:19 compute-0 kernel: tap856aef2b-c0: left promiscuous mode
Jan 26 10:13:19 compute-0 systemd[1]: libpod-b3a1231511c97c7182aac5f1b7b1fd87d9e2a07c8d94e58b20d642176e2ce50d.scope: Deactivated successfully.
Jan 26 10:13:19 compute-0 systemd[1]: libpod-b3a1231511c97c7182aac5f1b7b1fd87d9e2a07c8d94e58b20d642176e2ce50d.scope: Consumed 1.153s CPU time.
Jan 26 10:13:19 compute-0 conmon[270108]: conmon b3a1231511c97c7182aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3a1231511c97c7182aac5f1b7b1fd87d9e2a07c8d94e58b20d642176e2ce50d.scope/container/memory.events
Jan 26 10:13:19 compute-0 podman[270091]: 2026-01-26 10:13:19.807855672 +0000 UTC m=+0.940855036 container died b3a1231511c97c7182aac5f1b7b1fd87d9e2a07c8d94e58b20d642176e2ce50d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_jang, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 10:13:19 compute-0 nova_compute[254880]: 2026-01-26 10:13:19.816 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.820 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[aafc0ee4-64ee-4c1a-ae16-6d9779f0909e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.834 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[57dd488d-4fdf-4586-be08-e1660b95689e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.836 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[5e14c3e0-bb69-4818-b649-25ea5c95dd35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b9833301d04b3c17d5568cdc94ec3e51373a1e808a92d38d1f7284f727c1e14-merged.mount: Deactivated successfully.
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.850 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[ca47874b-bdf8-46ad-882e-91034ecb68f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424822, 'reachable_time': 30643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270284, 'error': None, 'target': 'ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
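[annotation] That privsep reply is a netlink RTM_NEWLINK dump of the loopback device taken inside the ovnmeta- namespace just before teardown. A minimal pyroute2 sketch that reproduces such a dump, valid only while the namespace still exists:

    from pyroute2 import NetNS

    # Enter the metadata namespace and dump its links, as the privsep helper does.
    with NetNS("ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2") as ns:
        for link in ns.get_links():
            # get_attr() pulls a single attribute out of the attrs list seen above.
            print(link.get_attr("IFLA_IFNAME"), link["state"])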
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.853 167020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 10:13:19 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:19.853 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[1d71715b-b8ae-46f4-9b15-894171a0afd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:13:19 compute-0 podman[270091]: 2026-01-26 10:13:19.853752435 +0000 UTC m=+0.986751799 container remove b3a1231511c97c7182aac5f1b7b1fd87d9e2a07c8d94e58b20d642176e2ce50d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_jang, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:13:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d856aef2b\x2dc9c5\x2d4069\x2d832f\x2d1db92e31d6c2.mount: Deactivated successfully.
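[annotation] The \x2d sequences in that mount unit are systemd's unit-name escaping: '/' maps to '-', so literal hyphens in the path must be encoded. A small sketch of the mapping for this unit name (the canonical tool is systemd-escape --path; the replace order below is an illustrative approximation):

    # Reproduce the escaped unit name for the netns bind mount above.
    path = "/run/netns/ovnmeta-856aef2b-c9c5-4069-832f-1db92e31d6c2"
    unit = path.lstrip("/").replace("-", "\\x2d").replace("/", "-") + ".mount"
    print(unit)  # run-netns-ovnmeta\x2d856aef2b\x2dc9c5\x2d4069\x2d832f\x2d1db92e31d6c2.mount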
Jan 26 10:13:19 compute-0 systemd[1]: libpod-conmon-b3a1231511c97c7182aac5f1b7b1fd87d9e2a07c8d94e58b20d642176e2ce50d.scope: Deactivated successfully.
Jan 26 10:13:19 compute-0 sudo[269982]: pam_unix(sudo:session): session closed for user root
Jan 26 10:13:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:13:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:13:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:13:20 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:13:20 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:13:20 compute-0 sudo[270289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:13:20 compute-0 sudo[270289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:13:20 compute-0 sudo[270289]: pam_unix(sudo:session): session closed for user root
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.257 254884 INFO nova.virt.libvirt.driver [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Deleting instance files /var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d_del
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.258 254884 INFO nova.virt.libvirt.driver [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Deletion of /var/lib/nova/instances/26741812-4ddf-457d-b571-7e2005b5133d_del complete
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.327 254884 INFO nova.compute.manager [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Took 0.97 seconds to destroy the instance on the hypervisor.
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.328 254884 DEBUG oslo.service.loopingcall [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.329 254884 DEBUG nova.compute.manager [-] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.329 254884 DEBUG nova.network.neutron [-] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 10:13:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:13:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:20.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
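[annotation] These beast access-log triples repeat every couple of seconds from 192.168.122.100 and .102, which looks like load-balancer health probes ("HEAD / HTTP/1.0" from anonymous). A minimal parser for the fields worth auditing, assuming the line format stays as shown:

    import re

    line = ('beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous '
            '[26/Jan/2026:10:13:20.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000026s')
    # Pull client IP, request, status and latency out of a beast log line.
    m = re.search(r'(?P<ip>[\d.]+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
                  r'"(?P<req>[^"]+)" (?P<status>\d+) .*latency=(?P<lat>[\d.]+)s', line)
    if m:
        print(m.group("ip"), m.group("status"), float(m.group("lat")))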
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.344 254884 DEBUG nova.compute.manager [req-8100bf45-5533-4559-888c-4ca55b4579a4 req-cf785254-343a-4e96-aa57-b54d7ce6075a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-vif-unplugged-92a5f80f-60e2-449d-9da8-ebaa31f1476c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.345 254884 DEBUG oslo_concurrency.lockutils [req-8100bf45-5533-4559-888c-4ca55b4579a4 req-cf785254-343a-4e96-aa57-b54d7ce6075a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "26741812-4ddf-457d-b571-7e2005b5133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.345 254884 DEBUG oslo_concurrency.lockutils [req-8100bf45-5533-4559-888c-4ca55b4579a4 req-cf785254-343a-4e96-aa57-b54d7ce6075a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.345 254884 DEBUG oslo_concurrency.lockutils [req-8100bf45-5533-4559-888c-4ca55b4579a4 req-cf785254-343a-4e96-aa57-b54d7ce6075a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.345 254884 DEBUG nova.compute.manager [req-8100bf45-5533-4559-888c-4ca55b4579a4 req-cf785254-343a-4e96-aa57-b54d7ce6075a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] No waiting events found dispatching network-vif-unplugged-92a5f80f-60e2-449d-9da8-ebaa31f1476c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.345 254884 DEBUG nova.compute.manager [req-8100bf45-5533-4559-888c-4ca55b4579a4 req-cf785254-343a-4e96-aa57-b54d7ce6075a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-vif-unplugged-92a5f80f-60e2-449d-9da8-ebaa31f1476c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.346 254884 DEBUG nova.compute.manager [req-8100bf45-5533-4559-888c-4ca55b4579a4 req-cf785254-343a-4e96-aa57-b54d7ce6075a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-vif-plugged-92a5f80f-60e2-449d-9da8-ebaa31f1476c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.346 254884 DEBUG oslo_concurrency.lockutils [req-8100bf45-5533-4559-888c-4ca55b4579a4 req-cf785254-343a-4e96-aa57-b54d7ce6075a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "26741812-4ddf-457d-b571-7e2005b5133d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.346 254884 DEBUG oslo_concurrency.lockutils [req-8100bf45-5533-4559-888c-4ca55b4579a4 req-cf785254-343a-4e96-aa57-b54d7ce6075a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.346 254884 DEBUG oslo_concurrency.lockutils [req-8100bf45-5533-4559-888c-4ca55b4579a4 req-cf785254-343a-4e96-aa57-b54d7ce6075a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.346 254884 DEBUG nova.compute.manager [req-8100bf45-5533-4559-888c-4ca55b4579a4 req-cf785254-343a-4e96-aa57-b54d7ce6075a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] No waiting events found dispatching network-vif-plugged-92a5f80f-60e2-449d-9da8-ebaa31f1476c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.346 254884 WARNING nova.compute.manager [req-8100bf45-5533-4559-888c-4ca55b4579a4 req-cf785254-343a-4e96-aa57-b54d7ce6075a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received unexpected event network-vif-plugged-92a5f80f-60e2-449d-9da8-ebaa31f1476c for instance with vm_state active and task_state deleting.
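[annotation] The Acquiring/acquired/released triples around pop_instance_event are oslo.concurrency's named-lock pattern; the 0.000s wait and hold times show the per-instance "-events" lock is uncontended here, and the WARNING above is just a late network-vif-plugged event arriving while the instance is already being deleted. A minimal sketch of the same lock pattern:

    from oslo_concurrency import lockutils

    # Same named-lock pattern as the "-events" lock in the lines above.
    @lockutils.synchronized("26741812-4ddf-457d-b571-7e2005b5133d-events")
    def _pop_event():
        # Critical section: look up and remove a waiting external event.
        return None

    _pop_event()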
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.370 254884 DEBUG nova.network.neutron [req-f14d649c-cf27-4f7f-9faf-ebc09043af05 req-a0764405-0d36-4fc6-bd0a-31af22f1c0b6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updated VIF entry in instance network info cache for port 92a5f80f-60e2-449d-9da8-ebaa31f1476c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.370 254884 DEBUG nova.network.neutron [req-f14d649c-cf27-4f7f-9faf-ebc09043af05 req-a0764405-0d36-4fc6-bd0a-31af22f1c0b6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updating instance_info_cache with network_info: [{"id": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "address": "fa:16:3e:1b:a5:e7", "network": {"id": "856aef2b-c9c5-4069-832f-1db92e31d6c2", "bridge": "br-int", "label": "tempest-network-smoke--1174108761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92a5f80f-60", "ovs_interfaceid": "92a5f80f-60e2-449d-9da8-ebaa31f1476c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.387 254884 DEBUG oslo_concurrency.lockutils [req-f14d649c-cf27-4f7f-9faf-ebc09043af05 req-a0764405-0d36-4fc6-bd0a-31af22f1c0b6 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-26741812-4ddf-457d-b571-7e2005b5133d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.784 254884 DEBUG nova.network.neutron [-] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.801 254884 INFO nova.compute.manager [-] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Took 0.47 seconds to deallocate network for instance.
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.843 254884 DEBUG oslo_concurrency.lockutils [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.843 254884 DEBUG oslo_concurrency.lockutils [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:13:20 compute-0 nova_compute[254880]: 2026-01-26 10:13:20.887 254884 DEBUG oslo_concurrency.processutils [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:13:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:20.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:21 compute-0 ceph-mon[74456]: pgmap v933: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 16 KiB/s wr, 57 op/s
Jan 26 10:13:21 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:13:21 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:13:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:13:21 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4042908871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:13:21 compute-0 nova_compute[254880]: 2026-01-26 10:13:21.375 254884 DEBUG oslo_concurrency.processutils [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
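[annotation] nova evidently shells out to ceph df here to size its RBD-backed disk inventory. A minimal sketch of the same call and the parse step, assuming the usual 'ceph df -f json' key layout:

    import json
    from oslo_concurrency import processutils

    # Same command line nova logged above, executed via oslo.concurrency.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)["stats"]  # assumed keys: total_bytes, total_avail_bytes
    print(stats["total_bytes"], stats["total_avail_bytes"])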
Jan 26 10:13:21 compute-0 nova_compute[254880]: 2026-01-26 10:13:21.381 254884 DEBUG nova.compute.provider_tree [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:13:21 compute-0 nova_compute[254880]: 2026-01-26 10:13:21.546 254884 DEBUG nova.scheduler.client.report [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
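[annotation] Placement turns that inventory into effective schedulable capacity as (total - reserved) * allocation_ratio. The arithmetic for the inventory shown:

    # Effective capacity implied by the inventory data above.
    inventory = {
        "VCPU": (8, 0, 4.0),
        "MEMORY_MB": (7679, 512, 1.0),
        "DISK_GB": (59, 1, 0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2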
Jan 26 10:13:21 compute-0 nova_compute[254880]: 2026-01-26 10:13:21.573 254884 DEBUG oslo_concurrency.lockutils [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:13:21 compute-0 nova_compute[254880]: 2026-01-26 10:13:21.607 254884 INFO nova.scheduler.client.report [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Deleted allocations for instance 26741812-4ddf-457d-b571-7e2005b5133d
Jan 26 10:13:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v934: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 17 KiB/s wr, 66 op/s
Jan 26 10:13:21 compute-0 nova_compute[254880]: 2026-01-26 10:13:21.802 254884 DEBUG oslo_concurrency.lockutils [None req-69b5c4fa-7a74-4907-a142-cefab32dc869 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "26741812-4ddf-457d-b571-7e2005b5133d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:13:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:13:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:13:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:13:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:13:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:22.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:22 compute-0 nova_compute[254880]: 2026-01-26 10:13:22.619 254884 DEBUG nova.compute.manager [req-c1711c80-a060-451f-9ded-726b0d80c2a0 req-3e9999a8-51a8-4607-a196-666b285a4639 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Received event network-vif-deleted-92a5f80f-60e2-449d-9da8-ebaa31f1476c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:13:22 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4042908871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:13:22 compute-0 ceph-mon[74456]: pgmap v934: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 17 KiB/s wr, 66 op/s
Jan 26 10:13:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:22.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:13:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v935: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.4 KiB/s wr, 37 op/s
Jan 26 10:13:23 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:23.878 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:13:23 compute-0 nova_compute[254880]: 2026-01-26 10:13:23.879 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:23 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:23.880 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 10:13:24 compute-0 sudo[270341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:13:24 compute-0 sudo[270341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:13:24 compute-0 sudo[270341]: pam_unix(sudo:session): session closed for user root
Jan 26 10:13:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:24.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:24 compute-0 nova_compute[254880]: 2026-01-26 10:13:24.670 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:24 compute-0 nova_compute[254880]: 2026-01-26 10:13:24.746 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:24 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:24.882 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:13:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:24.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:25 compute-0 ceph-mon[74456]: pgmap v935: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.4 KiB/s wr, 37 op/s
Jan 26 10:13:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v936: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.3 KiB/s wr, 57 op/s
Jan 26 10:13:26 compute-0 ceph-mon[74456]: pgmap v936: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.3 KiB/s wr, 57 op/s
Jan 26 10:13:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:13:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:26.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:13:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:13:26] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:13:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:13:26] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:13:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:26.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:13:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:13:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:13:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:13:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:13:27.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
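[annotation] This is the only hard failure in the window: Alertmanager cannot deliver to the dashboard webhook on compute-1 (deadline exceeded) or compute-2 (TCP i/o timeout). A quick reachability probe of the logged URL, a hypothetical troubleshooting step not taken in the log:

    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        # A plain GET is enough to distinguish a TCP-level timeout or refusal
        # (as the dispatcher hit above) from an application-level error.
        urllib.request.urlopen(url, timeout=5)
    except Exception as exc:
        print(f"unreachable: {exc}")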
Jan 26 10:13:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v937: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 26 10:13:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:13:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:28.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:13:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:13:28 compute-0 ceph-mon[74456]: pgmap v937: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 26 10:13:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:28.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v938: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 26 10:13:29 compute-0 nova_compute[254880]: 2026-01-26 10:13:29.673 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:29 compute-0 nova_compute[254880]: 2026-01-26 10:13:29.748 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:30 compute-0 podman[270372]: 2026-01-26 10:13:30.167979033 +0000 UTC m=+0.099373344 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
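[annotation] That health_status event comes from podman's healthcheck timer running the test configured in config_data ('/openstack/healthcheck'). The same check can be re-run on demand; a small sketch:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test and
    # returns 0 when healthy, matching health_status=healthy above.
    res = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"],
                         capture_output=True, text=True)
    print(res.returncode)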
Jan 26 10:13:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:30.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:30 compute-0 ceph-mon[74456]: pgmap v938: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 26 10:13:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:30.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:31 compute-0 nova_compute[254880]: 2026-01-26 10:13:31.541 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:31 compute-0 nova_compute[254880]: 2026-01-26 10:13:31.622 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v939: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Jan 26 10:13:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:13:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:13:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:13:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:13:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:13:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:13:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:32.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:32 compute-0 nova_compute[254880]: 2026-01-26 10:13:32.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:13:32 compute-0 nova_compute[254880]: 2026-01-26 10:13:32.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:13:33 compute-0 ceph-mon[74456]: pgmap v939: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Jan 26 10:13:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:13:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v940: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 938 B/s wr, 20 op/s
Jan 26 10:13:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:13:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:13:34 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 26 10:13:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:13:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:34.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:13:34 compute-0 ceph-mon[74456]: pgmap v940: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 938 B/s wr, 20 op/s
Jan 26 10:13:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:13:34 compute-0 nova_compute[254880]: 2026-01-26 10:13:34.509 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:13:34 compute-0 nova_compute[254880]: 2026-01-26 10:13:34.509 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:13:34 compute-0 nova_compute[254880]: 2026-01-26 10:13:34.509 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:13:34 compute-0 nova_compute[254880]: 2026-01-26 10:13:34.510 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:13:34 compute-0 nova_compute[254880]: 2026-01-26 10:13:34.510 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:13:34 compute-0 nova_compute[254880]: 2026-01-26 10:13:34.648 254884 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769422399.6468198, 26741812-4ddf-457d-b571-7e2005b5133d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:13:34 compute-0 nova_compute[254880]: 2026-01-26 10:13:34.648 254884 INFO nova.compute.manager [-] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] VM Stopped (Lifecycle Event)
Jan 26 10:13:34 compute-0 nova_compute[254880]: 2026-01-26 10:13:34.715 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:34 compute-0 nova_compute[254880]: 2026-01-26 10:13:34.730 254884 DEBUG nova.compute.manager [None req-4c5a8d92-535e-46b5-8a86-d9318c7aabf7 - - - - - -] [instance: 26741812-4ddf-457d-b571-7e2005b5133d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:13:34 compute-0 nova_compute[254880]: 2026-01-26 10:13:34.750 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:34.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:13:34 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1648272998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:13:35 compute-0 nova_compute[254880]: 2026-01-26 10:13:35.008 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:13:35 compute-0 nova_compute[254880]: 2026-01-26 10:13:35.162 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:13:35 compute-0 nova_compute[254880]: 2026-01-26 10:13:35.164 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4559MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:13:35 compute-0 nova_compute[254880]: 2026-01-26 10:13:35.164 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:13:35 compute-0 nova_compute[254880]: 2026-01-26 10:13:35.164 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:13:35 compute-0 nova_compute[254880]: 2026-01-26 10:13:35.361 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:13:35 compute-0 nova_compute[254880]: 2026-01-26 10:13:35.361 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:13:35 compute-0 nova_compute[254880]: 2026-01-26 10:13:35.393 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:13:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1648272998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:13:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2322081899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:13:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v941: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 938 B/s wr, 20 op/s
Jan 26 10:13:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:13:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3631170278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:13:35 compute-0 nova_compute[254880]: 2026-01-26 10:13:35.838 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:13:35 compute-0 nova_compute[254880]: 2026-01-26 10:13:35.844 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:13:35 compute-0 nova_compute[254880]: 2026-01-26 10:13:35.870 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:13:35 compute-0 nova_compute[254880]: 2026-01-26 10:13:35.899 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:13:35 compute-0 nova_compute[254880]: 2026-01-26 10:13:35.900 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:13:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:36.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:13:36] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 26 10:13:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:13:36] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 26 10:13:36 compute-0 ceph-mon[74456]: pgmap v941: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 938 B/s wr, 20 op/s
Jan 26 10:13:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3631170278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:13:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1416506356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:13:36 compute-0 nova_compute[254880]: 2026-01-26 10:13:36.900 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:13:36 compute-0 nova_compute[254880]: 2026-01-26 10:13:36.901 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:13:36 compute-0 nova_compute[254880]: 2026-01-26 10:13:36.901 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:13:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:36.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:36 compute-0 nova_compute[254880]: 2026-01-26 10:13:36.965 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:13:36 compute-0 nova_compute[254880]: 2026-01-26 10:13:36.966 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:13:36 compute-0 nova_compute[254880]: 2026-01-26 10:13:36.967 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:13:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:13:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:13:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:13:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:13:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:13:37.158Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:13:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v942: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:37 compute-0 nova_compute[254880]: 2026-01-26 10:13:37.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:13:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:13:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:38.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:13:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:13:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:38.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:38 compute-0 nova_compute[254880]: 2026-01-26 10:13:38.953 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:13:38 compute-0 nova_compute[254880]: 2026-01-26 10:13:38.954 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:13:39 compute-0 ceph-mon[74456]: pgmap v942: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2944046988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:13:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/486237949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:13:39 compute-0 nova_compute[254880]: 2026-01-26 10:13:39.188 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:13:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v943: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:39 compute-0 nova_compute[254880]: 2026-01-26 10:13:39.718 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:39 compute-0 nova_compute[254880]: 2026-01-26 10:13:39.752 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:40 compute-0 ceph-mon[74456]: pgmap v943: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:40.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:40.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:40 compute-0 nova_compute[254880]: 2026-01-26 10:13:40.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:13:40 compute-0 nova_compute[254880]: 2026-01-26 10:13:40.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:13:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v944: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:13:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:13:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:13:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:13:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:13:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:42.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:42 compute-0 ceph-mon[74456]: pgmap v944: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:13:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:42.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:13:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v945: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:44.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:44 compute-0 sudo[270460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:13:44 compute-0 sudo[270460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:13:44 compute-0 sudo[270460]: pam_unix(sudo:session): session closed for user root
Jan 26 10:13:44 compute-0 nova_compute[254880]: 2026-01-26 10:13:44.720 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:44 compute-0 nova_compute[254880]: 2026-01-26 10:13:44.755 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:13:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:44.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:13:45 compute-0 ceph-mon[74456]: pgmap v945: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v946: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:13:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:46.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:46 compute-0 ceph-mon[74456]: pgmap v946: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:13:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:13:46] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 26 10:13:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:13:46] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 26 10:13:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:13:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:46.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:13:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:13:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:13:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:13:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:13:47 compute-0 podman[270490]: 2026-01-26 10:13:47.134057088 +0000 UTC m=+0.065227152 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:13:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:13:47.159Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:13:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v947: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:13:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:48.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:13:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:13:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:13:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:13:48 compute-0 ceph-mon[74456]: pgmap v947: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:13:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:13:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:13:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:13:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:13:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:13:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:48.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v948: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:49 compute-0 nova_compute[254880]: 2026-01-26 10:13:49.722 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:49 compute-0 nova_compute[254880]: 2026-01-26 10:13:49.756 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:13:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:50.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:50 compute-0 sshd-session[270514]: Invalid user zabbix from 157.245.76.178 port 37852
Jan 26 10:13:50 compute-0 sshd-session[270514]: Connection closed by invalid user zabbix 157.245.76.178 port 37852 [preauth]
Jan 26 10:13:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:13:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:50.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:13:50 compute-0 ceph-mon[74456]: pgmap v948: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v949: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:13:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:13:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:13:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:13:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:13:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:52.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:52.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:53 compute-0 ceph-mon[74456]: pgmap v949: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:13:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:13:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v950: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:54 compute-0 ceph-mon[74456]: pgmap v950: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:54.698 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:13:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:54.698 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:13:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:13:54.699 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:13:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:54.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:54 compute-0 nova_compute[254880]: 2026-01-26 10:13:54.758 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:13:54 compute-0 nova_compute[254880]: 2026-01-26 10:13:54.760 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:13:54 compute-0 nova_compute[254880]: 2026-01-26 10:13:54.760 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 26 10:13:54 compute-0 nova_compute[254880]: 2026-01-26 10:13:54.760 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 10:13:54 compute-0 nova_compute[254880]: 2026-01-26 10:13:54.771 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:13:54 compute-0 nova_compute[254880]: 2026-01-26 10:13:54.772 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 10:13:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:54.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v951: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:13:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:13:56] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 26 10:13:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:13:56] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 26 10:13:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:13:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:56.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:13:56 compute-0 ceph-mon[74456]: pgmap v951: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:13:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:13:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:56.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:13:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:13:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:13:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:13:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:13:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:13:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:13:57.160Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:13:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v952: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 10:13:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/465910940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:13:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 10:13:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/465910940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:13:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:13:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.002000053s ======
Jan 26 10:13:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:13:58.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 26 10:13:58 compute-0 ceph-mon[74456]: pgmap v952: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/465910940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:13:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/465910940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:13:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:13:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:13:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:13:58.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:13:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v953: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:13:59 compute-0 nova_compute[254880]: 2026-01-26 10:13:59.773 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:14:00 compute-0 ceph-mon[74456]: pgmap v953: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:14:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:14:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:00.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:14:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:00.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:01 compute-0 podman[270526]: 2026-01-26 10:14:01.198148373 +0000 UTC m=+0.127635785 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 10:14:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v954: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:14:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:14:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:14:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:14:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:14:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:02.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:02 compute-0 ceph-mon[74456]: pgmap v954: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:14:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:02.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:14:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v955: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:14:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:14:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:14:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:14:04 compute-0 sudo[270556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:14:04 compute-0 sudo[270556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:14:04 compute-0 sudo[270556]: pam_unix(sudo:session): session closed for user root
Jan 26 10:14:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:14:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:04.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:14:04 compute-0 nova_compute[254880]: 2026-01-26 10:14:04.775 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:14:04 compute-0 nova_compute[254880]: 2026-01-26 10:14:04.775 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:04 compute-0 nova_compute[254880]: 2026-01-26 10:14:04.775 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 26 10:14:04 compute-0 nova_compute[254880]: 2026-01-26 10:14:04.775 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 10:14:04 compute-0 nova_compute[254880]: 2026-01-26 10:14:04.776 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 10:14:04 compute-0 nova_compute[254880]: 2026-01-26 10:14:04.777 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:04 compute-0 ceph-mon[74456]: pgmap v955: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:14:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:14:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:04.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:14:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v956: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:14:05 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/12435975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:14:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 26 10:14:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:14:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 26 10:14:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:14:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:06.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:14:06 compute-0 ceph-mon[74456]: pgmap v956: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:14:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:06.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:14:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:14:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:14:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:14:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:14:07.161Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:14:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v957: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:14:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:14:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:14:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:08.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:14:08 compute-0 ceph-mon[74456]: pgmap v957: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:14:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:08.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v958: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:14:09 compute-0 nova_compute[254880]: 2026-01-26 10:14:09.777 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:10.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:10 compute-0 ceph-mon[74456]: pgmap v958: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:14:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:10.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v959: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:14:11 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/49620544' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:14:11 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3883266146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:14:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:14:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:14:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:14:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:14:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:12.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:12 compute-0 ceph-mon[74456]: pgmap v959: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:14:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:12.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:14:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v960: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:14:14 compute-0 ceph-mon[74456]: pgmap v960: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.160595) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422454160631, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1087, "num_deletes": 501, "total_data_size": 1287958, "memory_usage": 1318880, "flush_reason": "Manual Compaction"}
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422454170016, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1267861, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28426, "largest_seqno": 29512, "table_properties": {"data_size": 1263060, "index_size": 1877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14321, "raw_average_key_size": 19, "raw_value_size": 1251336, "raw_average_value_size": 1704, "num_data_blocks": 82, "num_entries": 734, "num_filter_entries": 734, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769422388, "oldest_key_time": 1769422388, "file_creation_time": 1769422454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 9462 microseconds, and 3355 cpu microseconds.
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.170055) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1267861 bytes OK
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.170072) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.171828) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.171839) EVENT_LOG_v1 {"time_micros": 1769422454171836, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.171854) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1281900, prev total WAL file size 1281900, number of live WAL files 2.
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.172358) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1238KB)], [62(16MB)]
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422454172412, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 18552272, "oldest_snapshot_seqno": -1}
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5759 keys, 12337259 bytes, temperature: kUnknown
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422454265145, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 12337259, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12300662, "index_size": 21127, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 148911, "raw_average_key_size": 25, "raw_value_size": 12198418, "raw_average_value_size": 2118, "num_data_blocks": 846, "num_entries": 5759, "num_filter_entries": 5759, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769422454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.265518) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12337259 bytes
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.268382) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.7 rd, 132.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 16.5 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(24.4) write-amplify(9.7) OK, records in: 6776, records dropped: 1017 output_compression: NoCompression
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.268401) EVENT_LOG_v1 {"time_micros": 1769422454268392, "job": 34, "event": "compaction_finished", "compaction_time_micros": 92904, "compaction_time_cpu_micros": 25116, "output_level": 6, "num_output_files": 1, "total_output_size": 12337259, "num_input_records": 6776, "num_output_records": 5759, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422454268728, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422454271965, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.172287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.272144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.272150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.272152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.272155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:14:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:14:14.272157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:14:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:14.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:14 compute-0 nova_compute[254880]: 2026-01-26 10:14:14.778 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:14:14 compute-0 nova_compute[254880]: 2026-01-26 10:14:14.779 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:14 compute-0 nova_compute[254880]: 2026-01-26 10:14:14.779 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 26 10:14:14 compute-0 nova_compute[254880]: 2026-01-26 10:14:14.779 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 10:14:14 compute-0 nova_compute[254880]: 2026-01-26 10:14:14.779 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 10:14:14 compute-0 nova_compute[254880]: 2026-01-26 10:14:14.780 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:14.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v961: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 26 10:14:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:14:16] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 26 10:14:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:14:16] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 26 10:14:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:16.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:16 compute-0 ceph-mon[74456]: pgmap v961: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 26 10:14:16 compute-0 ovn_controller[155832]: 2026-01-26T10:14:16Z|00057|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Jan 26 10:14:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:16.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:14:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:14:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:14:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:14:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:14:17.162Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:14:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:14:17.162Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:14:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v962: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 10:14:18 compute-0 podman[270594]: 2026-01-26 10:14:18.121100636 +0000 UTC m=+0.050491336 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 10:14:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:14:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:14:18
Jan 26 10:14:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:14:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:14:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', '.mgr', '.nfs', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'backups', 'vms', 'default.rgw.control', 'images', 'default.rgw.meta']
Jan 26 10:14:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:14:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:18.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:14:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:14:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:14:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:14:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:14:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:14:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:14:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:14:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:18.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:18 compute-0 ceph-mon[74456]: pgmap v962: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 10:14:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:14:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v963: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 26 10:14:19 compute-0 nova_compute[254880]: 2026-01-26 10:14:19.781 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:14:19 compute-0 nova_compute[254880]: 2026-01-26 10:14:19.782 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:19 compute-0 nova_compute[254880]: 2026-01-26 10:14:19.782 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 26 10:14:19 compute-0 nova_compute[254880]: 2026-01-26 10:14:19.782 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 10:14:19 compute-0 nova_compute[254880]: 2026-01-26 10:14:19.783 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 10:14:19 compute-0 nova_compute[254880]: 2026-01-26 10:14:19.783 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:20 compute-0 ceph-mon[74456]: pgmap v963: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 26 10:14:20 compute-0 sudo[270615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:14:20 compute-0 sudo[270615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:14:20 compute-0 sudo[270615]: pam_unix(sudo:session): session closed for user root
Jan 26 10:14:20 compute-0 sudo[270640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:14:20 compute-0 sudo[270640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:14:20 compute-0 ceph-mgr[74755]: [devicehealth INFO root] Check health
Jan 26 10:14:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:20.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:20 compute-0 sudo[270640]: pam_unix(sudo:session): session closed for user root
Jan 26 10:14:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:20.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:14:21 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:14:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:14:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:14:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:14:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:14:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:14:21 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:14:21 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:14:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:14:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:14:21 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:14:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:14:21 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:14:21 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:14:21 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:14:21 compute-0 sudo[270698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:14:21 compute-0 sudo[270698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:14:21 compute-0 sudo[270698]: pam_unix(sudo:session): session closed for user root
Jan 26 10:14:21 compute-0 sudo[270723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:14:21 compute-0 sudo[270723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:14:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v964: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 26 10:14:21 compute-0 podman[270788]: 2026-01-26 10:14:21.849320079 +0000 UTC m=+0.020129661 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:14:21 compute-0 podman[270788]: 2026-01-26 10:14:21.947223537 +0000 UTC m=+0.118033119 container create 018ed94228bc12ec251f6eb7214c673bc0aded32edeb2272f8ac7e62b362aee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_rosalind, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 26 10:14:21 compute-0 systemd[1]: Started libpod-conmon-018ed94228bc12ec251f6eb7214c673bc0aded32edeb2272f8ac7e62b362aee8.scope.
Jan 26 10:14:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:14:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:14:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:14:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:14:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:14:22 compute-0 podman[270788]: 2026-01-26 10:14:22.032556027 +0000 UTC m=+0.203365629 container init 018ed94228bc12ec251f6eb7214c673bc0aded32edeb2272f8ac7e62b362aee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_rosalind, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:14:22 compute-0 podman[270788]: 2026-01-26 10:14:22.039632247 +0000 UTC m=+0.210441829 container start 018ed94228bc12ec251f6eb7214c673bc0aded32edeb2272f8ac7e62b362aee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 26 10:14:22 compute-0 podman[270788]: 2026-01-26 10:14:22.042543655 +0000 UTC m=+0.213353237 container attach 018ed94228bc12ec251f6eb7214c673bc0aded32edeb2272f8ac7e62b362aee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 26 10:14:22 compute-0 beautiful_rosalind[270804]: 167 167
Jan 26 10:14:22 compute-0 systemd[1]: libpod-018ed94228bc12ec251f6eb7214c673bc0aded32edeb2272f8ac7e62b362aee8.scope: Deactivated successfully.
Jan 26 10:14:22 compute-0 podman[270788]: 2026-01-26 10:14:22.047394226 +0000 UTC m=+0.218203808 container died 018ed94228bc12ec251f6eb7214c673bc0aded32edeb2272f8ac7e62b362aee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_rosalind, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 10:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1646c5e8a47bb40e896356e5b3828b353025a3c906a3da0ec8a8ab79998021d-merged.mount: Deactivated successfully.
Jan 26 10:14:22 compute-0 podman[270788]: 2026-01-26 10:14:22.105242468 +0000 UTC m=+0.276052050 container remove 018ed94228bc12ec251f6eb7214c673bc0aded32edeb2272f8ac7e62b362aee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:14:22 compute-0 systemd[1]: libpod-conmon-018ed94228bc12ec251f6eb7214c673bc0aded32edeb2272f8ac7e62b362aee8.scope: Deactivated successfully.
Jan 26 10:14:22 compute-0 podman[270827]: 2026-01-26 10:14:22.308556485 +0000 UTC m=+0.032209255 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:14:22 compute-0 podman[270827]: 2026-01-26 10:14:22.426130611 +0000 UTC m=+0.149783331 container create 8651969f7ba121b9f7275ae53aa782024c75a6c54a3d49dd60fc5661aa05ccb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 26 10:14:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:14:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:14:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:14:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:14:22 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:14:22 compute-0 ceph-mon[74456]: pgmap v964: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 26 10:14:22 compute-0 systemd[1]: Started libpod-conmon-8651969f7ba121b9f7275ae53aa782024c75a6c54a3d49dd60fc5661aa05ccb8.scope.
Jan 26 10:14:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c9238819a6d6f7e86015df839b2dd6b951c09d8932754cf5fd38f2557dfefe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c9238819a6d6f7e86015df839b2dd6b951c09d8932754cf5fd38f2557dfefe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c9238819a6d6f7e86015df839b2dd6b951c09d8932754cf5fd38f2557dfefe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c9238819a6d6f7e86015df839b2dd6b951c09d8932754cf5fd38f2557dfefe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c9238819a6d6f7e86015df839b2dd6b951c09d8932754cf5fd38f2557dfefe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:22 compute-0 podman[270827]: 2026-01-26 10:14:22.513891377 +0000 UTC m=+0.237544097 container init 8651969f7ba121b9f7275ae53aa782024c75a6c54a3d49dd60fc5661aa05ccb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:14:22 compute-0 podman[270827]: 2026-01-26 10:14:22.524899473 +0000 UTC m=+0.248552193 container start 8651969f7ba121b9f7275ae53aa782024c75a6c54a3d49dd60fc5661aa05ccb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:14:22 compute-0 podman[270827]: 2026-01-26 10:14:22.656796993 +0000 UTC m=+0.380449723 container attach 8651969f7ba121b9f7275ae53aa782024c75a6c54a3d49dd60fc5661aa05ccb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 10:14:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:22.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:22 compute-0 angry_nobel[270845]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:14:22 compute-0 angry_nobel[270845]: --> All data devices are unavailable
Jan 26 10:14:22 compute-0 systemd[1]: libpod-8651969f7ba121b9f7275ae53aa782024c75a6c54a3d49dd60fc5661aa05ccb8.scope: Deactivated successfully.
Jan 26 10:14:22 compute-0 podman[270827]: 2026-01-26 10:14:22.86679053 +0000 UTC m=+0.590443250 container died 8651969f7ba121b9f7275ae53aa782024c75a6c54a3d49dd60fc5661aa05ccb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:14:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:22.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-70c9238819a6d6f7e86015df839b2dd6b951c09d8932754cf5fd38f2557dfefe-merged.mount: Deactivated successfully.
Jan 26 10:14:23 compute-0 podman[270827]: 2026-01-26 10:14:23.161442568 +0000 UTC m=+0.885095288 container remove 8651969f7ba121b9f7275ae53aa782024c75a6c54a3d49dd60fc5661aa05ccb8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 10:14:23 compute-0 systemd[1]: libpod-conmon-8651969f7ba121b9f7275ae53aa782024c75a6c54a3d49dd60fc5661aa05ccb8.scope: Deactivated successfully.
Jan 26 10:14:23 compute-0 sudo[270723]: pam_unix(sudo:session): session closed for user root
Jan 26 10:14:23 compute-0 sudo[270874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:14:23 compute-0 sudo[270874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:14:23 compute-0 sudo[270874]: pam_unix(sudo:session): session closed for user root
Jan 26 10:14:23 compute-0 sudo[270899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:14:23 compute-0 sudo[270899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:14:23 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:14:23 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3329353095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 26 10:14:23 compute-0 podman[270965]: 2026-01-26 10:14:23.783587869 +0000 UTC m=+0.022939348 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:14:24 compute-0 podman[270965]: 2026-01-26 10:14:24.100420843 +0000 UTC m=+0.339772332 container create 4b42c31ea182b6a5b02891aac4ec9578d4908e17c0e64db00b10ad5a56520662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_fermi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 26 10:14:24 compute-0 systemd[1]: Started libpod-conmon-4b42c31ea182b6a5b02891aac4ec9578d4908e17c0e64db00b10ad5a56520662.scope.
Jan 26 10:14:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:14:24 compute-0 podman[270965]: 2026-01-26 10:14:24.344106824 +0000 UTC m=+0.583458283 container init 4b42c31ea182b6a5b02891aac4ec9578d4908e17c0e64db00b10ad5a56520662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_fermi, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 26 10:14:24 compute-0 podman[270965]: 2026-01-26 10:14:24.351519012 +0000 UTC m=+0.590870461 container start 4b42c31ea182b6a5b02891aac4ec9578d4908e17c0e64db00b10ad5a56520662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_fermi, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:14:24 compute-0 compassionate_fermi[270982]: 167 167
Jan 26 10:14:24 compute-0 systemd[1]: libpod-4b42c31ea182b6a5b02891aac4ec9578d4908e17c0e64db00b10ad5a56520662.scope: Deactivated successfully.
Jan 26 10:14:24 compute-0 podman[270965]: 2026-01-26 10:14:24.404959927 +0000 UTC m=+0.644311376 container attach 4b42c31ea182b6a5b02891aac4ec9578d4908e17c0e64db00b10ad5a56520662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_fermi, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 10:14:24 compute-0 podman[270965]: 2026-01-26 10:14:24.405513122 +0000 UTC m=+0.644864571 container died 4b42c31ea182b6a5b02891aac4ec9578d4908e17c0e64db00b10ad5a56520662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 10:14:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e78276e57a1fa7f0f11f44ee942a0306f6d80d95129f327affe02e07e48f33e-merged.mount: Deactivated successfully.
Jan 26 10:14:24 compute-0 podman[270965]: 2026-01-26 10:14:24.515835354 +0000 UTC m=+0.755186803 container remove 4b42c31ea182b6a5b02891aac4ec9578d4908e17c0e64db00b10ad5a56520662 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_fermi, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 10:14:24 compute-0 systemd[1]: libpod-conmon-4b42c31ea182b6a5b02891aac4ec9578d4908e17c0e64db00b10ad5a56520662.scope: Deactivated successfully.
Jan 26 10:14:24 compute-0 sudo[271002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:14:24 compute-0 sudo[271002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:14:24 compute-0 sudo[271002]: pam_unix(sudo:session): session closed for user root
Jan 26 10:14:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:14:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:24.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:14:24 compute-0 podman[271032]: 2026-01-26 10:14:24.678337025 +0000 UTC m=+0.025082694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:14:24 compute-0 podman[271032]: 2026-01-26 10:14:24.776633424 +0000 UTC m=+0.123379073 container create a3f9c05711d08d0053064d722b47bf00e8dfac90125bc8d47d3de219eaf66910 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jang, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 10:14:24 compute-0 ceph-mon[74456]: pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 26 10:14:24 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:24.784 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:14:24 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:24.786 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 10:14:24 compute-0 nova_compute[254880]: 2026-01-26 10:14:24.786 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:24 compute-0 nova_compute[254880]: 2026-01-26 10:14:24.789 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:14:24 compute-0 systemd[1]: Started libpod-conmon-a3f9c05711d08d0053064d722b47bf00e8dfac90125bc8d47d3de219eaf66910.scope.
Jan 26 10:14:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ff97cf03f2d8f867ee63592e089c20816fca89c1cba68934b12bb25c8b6efc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ff97cf03f2d8f867ee63592e089c20816fca89c1cba68934b12bb25c8b6efc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ff97cf03f2d8f867ee63592e089c20816fca89c1cba68934b12bb25c8b6efc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2ff97cf03f2d8f867ee63592e089c20816fca89c1cba68934b12bb25c8b6efc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:24 compute-0 podman[271032]: 2026-01-26 10:14:24.864334948 +0000 UTC m=+0.211080617 container init a3f9c05711d08d0053064d722b47bf00e8dfac90125bc8d47d3de219eaf66910 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 10:14:24 compute-0 podman[271032]: 2026-01-26 10:14:24.872004573 +0000 UTC m=+0.218750222 container start a3f9c05711d08d0053064d722b47bf00e8dfac90125bc8d47d3de219eaf66910 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:14:24 compute-0 podman[271032]: 2026-01-26 10:14:24.87706467 +0000 UTC m=+0.223810319 container attach a3f9c05711d08d0053064d722b47bf00e8dfac90125bc8d47d3de219eaf66910 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jang, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Jan 26 10:14:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:24.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:25 compute-0 exciting_jang[271050]: {
Jan 26 10:14:25 compute-0 exciting_jang[271050]:     "0": [
Jan 26 10:14:25 compute-0 exciting_jang[271050]:         {
Jan 26 10:14:25 compute-0 exciting_jang[271050]:             "devices": [
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "/dev/loop3"
Jan 26 10:14:25 compute-0 exciting_jang[271050]:             ],
Jan 26 10:14:25 compute-0 exciting_jang[271050]:             "lv_name": "ceph_lv0",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:             "lv_size": "21470642176",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:             "name": "ceph_lv0",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:             "tags": {
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "ceph.cluster_name": "ceph",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "ceph.crush_device_class": "",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "ceph.encrypted": "0",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "ceph.osd_id": "0",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "ceph.type": "block",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "ceph.vdo": "0",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:                 "ceph.with_tpm": "0"
Jan 26 10:14:25 compute-0 exciting_jang[271050]:             },
Jan 26 10:14:25 compute-0 exciting_jang[271050]:             "type": "block",
Jan 26 10:14:25 compute-0 exciting_jang[271050]:             "vg_name": "ceph_vg0"
Jan 26 10:14:25 compute-0 exciting_jang[271050]:         }
Jan 26 10:14:25 compute-0 exciting_jang[271050]:     ]
Jan 26 10:14:25 compute-0 exciting_jang[271050]: }
Jan 26 10:14:25 compute-0 systemd[1]: libpod-a3f9c05711d08d0053064d722b47bf00e8dfac90125bc8d47d3de219eaf66910.scope: Deactivated successfully.
Jan 26 10:14:25 compute-0 podman[271032]: 2026-01-26 10:14:25.200297535 +0000 UTC m=+0.547043184 container died a3f9c05711d08d0053064d722b47bf00e8dfac90125bc8d47d3de219eaf66910 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jang, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:14:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2ff97cf03f2d8f867ee63592e089c20816fca89c1cba68934b12bb25c8b6efc-merged.mount: Deactivated successfully.
Jan 26 10:14:25 compute-0 podman[271032]: 2026-01-26 10:14:25.507232304 +0000 UTC m=+0.853977953 container remove a3f9c05711d08d0053064d722b47bf00e8dfac90125bc8d47d3de219eaf66910 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_jang, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:14:25 compute-0 systemd[1]: libpod-conmon-a3f9c05711d08d0053064d722b47bf00e8dfac90125bc8d47d3de219eaf66910.scope: Deactivated successfully.
Jan 26 10:14:25 compute-0 sudo[270899]: pam_unix(sudo:session): session closed for user root
Jan 26 10:14:25 compute-0 sudo[271073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:14:25 compute-0 sudo[271073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:14:25 compute-0 sudo[271073]: pam_unix(sudo:session): session closed for user root
Jan 26 10:14:25 compute-0 sudo[271098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:14:25 compute-0 sudo[271098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:14:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 26 10:14:26 compute-0 podman[271162]: 2026-01-26 10:14:26.073161295 +0000 UTC m=+0.048587536 container create 368266b96010fc12d5fd428c3593f16a00cc9fe4aecd503d55c208b24cf7b8ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:14:26 compute-0 systemd[1]: Started libpod-conmon-368266b96010fc12d5fd428c3593f16a00cc9fe4aecd503d55c208b24cf7b8ff.scope.
Jan 26 10:14:26 compute-0 podman[271162]: 2026-01-26 10:14:26.04694127 +0000 UTC m=+0.022367532 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:14:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:14:26 compute-0 podman[271162]: 2026-01-26 10:14:26.210722247 +0000 UTC m=+0.186148508 container init 368266b96010fc12d5fd428c3593f16a00cc9fe4aecd503d55c208b24cf7b8ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:14:26 compute-0 podman[271162]: 2026-01-26 10:14:26.218728162 +0000 UTC m=+0.194154413 container start 368266b96010fc12d5fd428c3593f16a00cc9fe4aecd503d55c208b24cf7b8ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_saha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 10:14:26 compute-0 compassionate_saha[271178]: 167 167
Jan 26 10:14:26 compute-0 podman[271162]: 2026-01-26 10:14:26.222512444 +0000 UTC m=+0.197938715 container attach 368266b96010fc12d5fd428c3593f16a00cc9fe4aecd503d55c208b24cf7b8ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_saha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:14:26 compute-0 systemd[1]: libpod-368266b96010fc12d5fd428c3593f16a00cc9fe4aecd503d55c208b24cf7b8ff.scope: Deactivated successfully.
Jan 26 10:14:26 compute-0 podman[271162]: 2026-01-26 10:14:26.223790847 +0000 UTC m=+0.199217088 container died 368266b96010fc12d5fd428c3593f16a00cc9fe4aecd503d55c208b24cf7b8ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_saha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:14:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1489f05a9205a864372ac17fe18f3ecd9aa71e5b05f6877cfa98fe39e24aa3c2-merged.mount: Deactivated successfully.
Jan 26 10:14:26 compute-0 podman[271162]: 2026-01-26 10:14:26.356385437 +0000 UTC m=+0.331811678 container remove 368266b96010fc12d5fd428c3593f16a00cc9fe4aecd503d55c208b24cf7b8ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_saha, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:14:26 compute-0 systemd[1]: libpod-conmon-368266b96010fc12d5fd428c3593f16a00cc9fe4aecd503d55c208b24cf7b8ff.scope: Deactivated successfully.
Jan 26 10:14:26 compute-0 podman[271206]: 2026-01-26 10:14:26.615720967 +0000 UTC m=+0.088684621 container create 69ffab0423d9ee2f993cc2dd193a538550b35a92e56ab9b6ffac9826886257bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_burnell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 26 10:14:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:14:26] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Jan 26 10:14:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:14:26] "GET /metrics HTTP/1.1" 200 48483 "" "Prometheus/2.51.0"
Jan 26 10:14:26 compute-0 podman[271206]: 2026-01-26 10:14:26.558256205 +0000 UTC m=+0.031219889 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:14:26 compute-0 systemd[1]: Started libpod-conmon-69ffab0423d9ee2f993cc2dd193a538550b35a92e56ab9b6ffac9826886257bd.scope.
Jan 26 10:14:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0051e726cb0c3c27f50b27856d3070f58ad870964792e15876162a09c067369c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0051e726cb0c3c27f50b27856d3070f58ad870964792e15876162a09c067369c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0051e726cb0c3c27f50b27856d3070f58ad870964792e15876162a09c067369c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0051e726cb0c3c27f50b27856d3070f58ad870964792e15876162a09c067369c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:26 compute-0 podman[271206]: 2026-01-26 10:14:26.708311363 +0000 UTC m=+0.181275047 container init 69ffab0423d9ee2f993cc2dd193a538550b35a92e56ab9b6ffac9826886257bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Jan 26 10:14:26 compute-0 podman[271206]: 2026-01-26 10:14:26.714778936 +0000 UTC m=+0.187742600 container start 69ffab0423d9ee2f993cc2dd193a538550b35a92e56ab9b6ffac9826886257bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_burnell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 26 10:14:26 compute-0 podman[271206]: 2026-01-26 10:14:26.719584766 +0000 UTC m=+0.192548430 container attach 69ffab0423d9ee2f993cc2dd193a538550b35a92e56ab9b6ffac9826886257bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 26 10:14:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:14:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:26.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:14:26 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:26.789 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:14:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:26.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:14:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:14:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:14:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:14:27 compute-0 ceph-mon[74456]: pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 26 10:14:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:14:27.163Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:14:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:14:27.163Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:14:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:14:27.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:14:27 compute-0 lvm[271298]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:14:27 compute-0 lvm[271298]: VG ceph_vg0 finished
Jan 26 10:14:27 compute-0 loving_burnell[271223]: {}
Jan 26 10:14:27 compute-0 systemd[1]: libpod-69ffab0423d9ee2f993cc2dd193a538550b35a92e56ab9b6ffac9826886257bd.scope: Deactivated successfully.
Jan 26 10:14:27 compute-0 systemd[1]: libpod-69ffab0423d9ee2f993cc2dd193a538550b35a92e56ab9b6ffac9826886257bd.scope: Consumed 1.094s CPU time.
Jan 26 10:14:27 compute-0 podman[271206]: 2026-01-26 10:14:27.388552641 +0000 UTC m=+0.861516295 container died 69ffab0423d9ee2f993cc2dd193a538550b35a92e56ab9b6ffac9826886257bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:14:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0051e726cb0c3c27f50b27856d3070f58ad870964792e15876162a09c067369c-merged.mount: Deactivated successfully.
Jan 26 10:14:27 compute-0 podman[271206]: 2026-01-26 10:14:27.47230156 +0000 UTC m=+0.945265214 container remove 69ffab0423d9ee2f993cc2dd193a538550b35a92e56ab9b6ffac9826886257bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_burnell, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 10:14:27 compute-0 systemd[1]: libpod-conmon-69ffab0423d9ee2f993cc2dd193a538550b35a92e56ab9b6ffac9826886257bd.scope: Deactivated successfully.
Jan 26 10:14:27 compute-0 sudo[271098]: pam_unix(sudo:session): session closed for user root
Jan 26 10:14:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:14:27 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:14:27 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:14:27 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:14:27 compute-0 sudo[271317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:14:27 compute-0 sudo[271317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:14:27 compute-0 sudo[271317]: pam_unix(sudo:session): session closed for user root
Jan 26 10:14:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 10:14:28 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:14:28 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:14:28 compute-0 ceph-mon[74456]: pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 10:14:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:14:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:14:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:28.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:14:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:28.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v968: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 10:14:29 compute-0 nova_compute[254880]: 2026-01-26 10:14:29.786 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:30.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:30 compute-0 ceph-mon[74456]: pgmap v968: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 10:14:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:14:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:30.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:14:31 compute-0 nova_compute[254880]: 2026-01-26 10:14:31.336 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "c05c1aad-49b9-43df-99b6-602b689d2c8d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:31 compute-0 nova_compute[254880]: 2026-01-26 10:14:31.337 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:31 compute-0 nova_compute[254880]: 2026-01-26 10:14:31.354 254884 DEBUG nova.compute.manager [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 10:14:31 compute-0 nova_compute[254880]: 2026-01-26 10:14:31.428 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:31 compute-0 nova_compute[254880]: 2026-01-26 10:14:31.429 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:31 compute-0 nova_compute[254880]: 2026-01-26 10:14:31.438 254884 DEBUG nova.virt.hardware [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 10:14:31 compute-0 nova_compute[254880]: 2026-01-26 10:14:31.439 254884 INFO nova.compute.claims [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Claim successful on node compute-0.ctlplane.example.com
Jan 26 10:14:31 compute-0 nova_compute[254880]: 2026-01-26 10:14:31.536 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:14:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v969: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:14:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:14:31 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1730871621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:31 compute-0 nova_compute[254880]: 2026-01-26 10:14:31.991 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:14:31 compute-0 nova_compute[254880]: 2026-01-26 10:14:31.997 254884 DEBUG nova.compute.provider_tree [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:14:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:14:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:14:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:14:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.013 254884 DEBUG nova.scheduler.client.report [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.033 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.034 254884 DEBUG nova.compute.manager [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.084 254884 DEBUG nova.compute.manager [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.086 254884 DEBUG nova.network.neutron [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.108 254884 INFO nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.123 254884 DEBUG nova.compute.manager [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 10:14:32 compute-0 podman[271368]: 2026-01-26 10:14:32.163585963 +0000 UTC m=+0.089675118 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.231 254884 DEBUG nova.compute.manager [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.232 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.232 254884 INFO nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Creating image(s)
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.257 254884 DEBUG nova.storage.rbd_utils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image c05c1aad-49b9-43df-99b6-602b689d2c8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.284 254884 DEBUG nova.storage.rbd_utils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image c05c1aad-49b9-43df-99b6-602b689d2c8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.311 254884 DEBUG nova.storage.rbd_utils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image c05c1aad-49b9-43df-99b6-602b689d2c8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.315 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.389 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.390 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "d81880e926e175d0cc7241caa7cc18231a8a289c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.391 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "d81880e926e175d0cc7241caa7cc18231a8a289c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.392 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "d81880e926e175d0cc7241caa7cc18231a8a289c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.422 254884 DEBUG nova.storage.rbd_utils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image c05c1aad-49b9-43df-99b6-602b689d2c8d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.426 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c c05c1aad-49b9-43df-99b6-602b689d2c8d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:14:32 compute-0 sshd-session[271449]: Invalid user zabbix from 157.245.76.178 port 42462
Jan 26 10:14:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:32.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.753 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c c05c1aad-49b9-43df-99b6-602b689d2c8d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:14:32 compute-0 ceph-mon[74456]: pgmap v969: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:14:32 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1730871621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:32 compute-0 sshd-session[271449]: Connection closed by invalid user zabbix 157.245.76.178 port 42462 [preauth]
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.848 254884 DEBUG nova.storage.rbd_utils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] resizing rbd image c05c1aad-49b9-43df-99b6-602b689d2c8d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.952 254884 DEBUG nova.objects.instance [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'migration_context' on Instance uuid c05c1aad-49b9-43df-99b6-602b689d2c8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.988 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.988 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.988 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.988 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:14:32 compute-0 nova_compute[254880]: 2026-01-26 10:14:32.989 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:14:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:32.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.004 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.006 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Ensure instance console log exists: /var/lib/nova/instances/c05c1aad-49b9-43df-99b6-602b689d2c8d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.007 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.007 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.008 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:14:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3807772300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.422 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.567 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.568 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4517MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.568 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.569 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.660 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Instance c05c1aad-49b9-43df-99b6-602b689d2c8d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.660 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.660 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
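
The used_ram figure in that final view is host-reserved memory plus the memory of instances tracked on the node; with the 512 MB MEMORY_MB reservation from the inventory reported a few lines below and the single 128 MB m1.nano instance, that works out to the logged 640 MB. A sketch of the arithmetic, values copied from this log:

    reserved_host_memory_mb = 512        # MEMORY_MB 'reserved' in the inventory
    instance_memory_mb = [128]           # one m1.nano instance on this node
    assert reserved_host_memory_mb + sum(instance_memory_mb) == 640
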
Jan 26 10:14:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:14:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v970: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:14:33 compute-0 nova_compute[254880]: 2026-01-26 10:14:33.704 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:14:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:14:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:14:33 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3807772300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:14:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:14:34 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2674525483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:34 compute-0 nova_compute[254880]: 2026-01-26 10:14:34.170 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:14:34 compute-0 nova_compute[254880]: 2026-01-26 10:14:34.175 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:14:34 compute-0 nova_compute[254880]: 2026-01-26 10:14:34.202 254884 DEBUG nova.policy [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1208d3e25b940ea93fe76884c7a53db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 10:14:34 compute-0 nova_compute[254880]: 2026-01-26 10:14:34.207 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
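
Placement turns that inventory into schedulable capacity per resource class as (total - reserved) * allocation_ratio. Plugging in the logged values; the formula is placement's standard capacity rule, and the arithmetic below is only an illustration:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)              # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
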
Jan 26 10:14:34 compute-0 nova_compute[254880]: 2026-01-26 10:14:34.240 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:14:34 compute-0 nova_compute[254880]: 2026-01-26 10:14:34.240 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:34.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:34 compute-0 nova_compute[254880]: 2026-01-26 10:14:34.789 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:34 compute-0 ceph-mon[74456]: pgmap v970: 353 pgs: 353 active+clean; 41 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:14:34 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2674525483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:14:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:35.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:14:35 compute-0 nova_compute[254880]: 2026-01-26 10:14:35.091 254884 DEBUG nova.network.neutron [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Successfully updated port: 386a7730-6a16-4b18-b368-561762a8f7af _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 10:14:35 compute-0 nova_compute[254880]: 2026-01-26 10:14:35.105 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "refresh_cache-c05c1aad-49b9-43df-99b6-602b689d2c8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:14:35 compute-0 nova_compute[254880]: 2026-01-26 10:14:35.106 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquired lock "refresh_cache-c05c1aad-49b9-43df-99b6-602b689d2c8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:14:35 compute-0 nova_compute[254880]: 2026-01-26 10:14:35.106 254884 DEBUG nova.network.neutron [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 10:14:35 compute-0 nova_compute[254880]: 2026-01-26 10:14:35.204 254884 DEBUG nova.compute.manager [req-274b3f6f-f692-4395-81c7-af575963ea9b req-d4b80834-52de-4484-8640-81a6b85a3641 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Received event network-changed-386a7730-6a16-4b18-b368-561762a8f7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:14:35 compute-0 nova_compute[254880]: 2026-01-26 10:14:35.205 254884 DEBUG nova.compute.manager [req-274b3f6f-f692-4395-81c7-af575963ea9b req-d4b80834-52de-4484-8640-81a6b85a3641 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Refreshing instance network info cache due to event network-changed-386a7730-6a16-4b18-b368-561762a8f7af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:14:35 compute-0 nova_compute[254880]: 2026-01-26 10:14:35.205 254884 DEBUG oslo_concurrency.lockutils [req-274b3f6f-f692-4395-81c7-af575963ea9b req-d4b80834-52de-4484-8640-81a6b85a3641 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-c05c1aad-49b9-43df-99b6-602b689d2c8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:14:35 compute-0 nova_compute[254880]: 2026-01-26 10:14:35.241 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:14:35 compute-0 nova_compute[254880]: 2026-01-26 10:14:35.262 254884 DEBUG nova.network.neutron [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 10:14:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v971: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:14:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:14:36] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Jan 26 10:14:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:14:36] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Jan 26 10:14:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:14:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:36.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:14:36 compute-0 nova_compute[254880]: 2026-01-26 10:14:36.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:14:36 compute-0 nova_compute[254880]: 2026-01-26 10:14:36.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:14:36 compute-0 nova_compute[254880]: 2026-01-26 10:14:36.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:14:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:14:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:14:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:14:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:14:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:37.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.053 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.054 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:14:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:14:37.166Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.292 254884 DEBUG nova.network.neutron [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Updating instance_info_cache with network_info: [{"id": "386a7730-6a16-4b18-b368-561762a8f7af", "address": "fa:16:3e:d6:e4:a1", "network": {"id": "f91dcb4b-184c-45d6-a0e9-285bb6bc3464", "bridge": "br-int", "label": "tempest-network-smoke--753987758", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386a7730-6a", "ovs_interfaceid": "386a7730-6a16-4b18-b368-561762a8f7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.320 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Releasing lock "refresh_cache-c05c1aad-49b9-43df-99b6-602b689d2c8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.320 254884 DEBUG nova.compute.manager [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Instance network_info: |[{"id": "386a7730-6a16-4b18-b368-561762a8f7af", "address": "fa:16:3e:d6:e4:a1", "network": {"id": "f91dcb4b-184c-45d6-a0e9-285bb6bc3464", "bridge": "br-int", "label": "tempest-network-smoke--753987758", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386a7730-6a", "ovs_interfaceid": "386a7730-6a16-4b18-b368-561762a8f7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.321 254884 DEBUG oslo_concurrency.lockutils [req-274b3f6f-f692-4395-81c7-af575963ea9b req-d4b80834-52de-4484-8640-81a6b85a3641 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-c05c1aad-49b9-43df-99b6-602b689d2c8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.321 254884 DEBUG nova.network.neutron [req-274b3f6f-f692-4395-81c7-af575963ea9b req-d4b80834-52de-4484-8640-81a6b85a3641 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Refreshing network info cache for port 386a7730-6a16-4b18-b368-561762a8f7af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.325 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Start _get_guest_xml network_info=[{"id": "386a7730-6a16-4b18-b368-561762a8f7af", "address": "fa:16:3e:d6:e4:a1", "network": {"id": "f91dcb4b-184c-45d6-a0e9-285bb6bc3464", "bridge": "br-int", "label": "tempest-network-smoke--753987758", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386a7730-6a", "ovs_interfaceid": "386a7730-6a16-4b18-b368-561762a8f7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T10:05:39Z,direct_url=<?>,disk_format='qcow2',id=6789692f-fc1f-4efa-ae75-dcc13be695ef,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3ff3fa2a5531460b993c609589aa545d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T10:05:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'device_type': 'disk', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'image_id': '6789692f-fc1f-4efa-ae75-dcc13be695ef'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.329 254884 WARNING nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.333 254884 DEBUG nova.virt.libvirt.host [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.334 254884 DEBUG nova.virt.libvirt.host [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.337 254884 DEBUG nova.virt.libvirt.host [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.338 254884 DEBUG nova.virt.libvirt.host [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
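
The two probes above first look for a cgroup v1 cpu controller (absent on this host) and then find one on the unified cgroup v2 hierarchy. On a cgroup-v2 host that check amounts to looking for "cpu" in the hierarchy's controller list; a plain-filesystem approximation, not Nova's exact code path:

    # /sys/fs/cgroup/cgroup.controllers lists controllers enabled at the root
    with open("/sys/fs/cgroup/cgroup.controllers") as f:
        has_cpu_controller = "cpu" in f.read().split()
    print(has_cpu_controller)            # True matches 'CPU controller found on host.'
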
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.338 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.338 254884 DEBUG nova.virt.hardware [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T10:05:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='57e1601b-dbfa-4d3b-8b96-27302e4a7a06',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T10:05:39Z,direct_url=<?>,disk_format='qcow2',id=6789692f-fc1f-4efa-ae75-dcc13be695ef,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3ff3fa2a5531460b993c609589aa545d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T10:05:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.339 254884 DEBUG nova.virt.hardware [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.339 254884 DEBUG nova.virt.hardware [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.339 254884 DEBUG nova.virt.hardware [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.339 254884 DEBUG nova.virt.hardware [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.339 254884 DEBUG nova.virt.hardware [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.340 254884 DEBUG nova.virt.hardware [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.340 254884 DEBUG nova.virt.hardware [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.340 254884 DEBUG nova.virt.hardware [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.340 254884 DEBUG nova.virt.hardware [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.341 254884 DEBUG nova.virt.hardware [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.344 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:14:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v972: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:14:37 compute-0 ceph-mon[74456]: pgmap v971: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:14:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1976849613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 10:14:37 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2098853532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.792 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.816 254884 DEBUG nova.storage.rbd_utils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image c05c1aad-49b9-43df-99b6-602b689d2c8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.820 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:14:37 compute-0 nova_compute[254880]: 2026-01-26 10:14:37.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:14:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 10:14:38 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4096772364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.297 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
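
These "ceph mon dump" calls are how Nova learns which monitor endpoints to embed as <host> elements in the RBD disk sections of the guest XML further below. A sketch of that extraction, assuming the classic public_addr field of the form "ip:port/nonce" (newer clusters also expose a public_addrs addrvec):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout

    hosts = []
    for mon in json.loads(out)["mons"]:
        addr = mon["public_addr"].split("/")[0]   # e.g. "192.168.122.100:6789"
        ip, port = addr.rsplit(":", 1)
        hosts.append((ip, port))
    # -> [("192.168.122.100", "6789"), ...] feeding <host name=... port=.../>
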
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.298 254884 DEBUG nova.virt.libvirt.vif [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T10:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1518110532',display_name='tempest-TestNetworkBasicOps-server-1518110532',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1518110532',id=9,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBYnDhbtyGqpEY46JiIKjPTJn7X7SDbPg9dxMywFFlfcufg39j/xqUFKCoYA/S5N/V7V2wB2/Cd1QuC4xtyvWS4ae02/rbGvNQh2VuaoSIu9BeIZQQ3HO+cbbBgHZD/G2g==',key_name='tempest-TestNetworkBasicOps-1626087452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-bd5018dm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T10:14:32Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=c05c1aad-49b9-43df-99b6-602b689d2c8d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "386a7730-6a16-4b18-b368-561762a8f7af", "address": "fa:16:3e:d6:e4:a1", "network": {"id": "f91dcb4b-184c-45d6-a0e9-285bb6bc3464", "bridge": "br-int", "label": "tempest-network-smoke--753987758", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386a7730-6a", "ovs_interfaceid": "386a7730-6a16-4b18-b368-561762a8f7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.299 254884 DEBUG nova.network.os_vif_util [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "386a7730-6a16-4b18-b368-561762a8f7af", "address": "fa:16:3e:d6:e4:a1", "network": {"id": "f91dcb4b-184c-45d6-a0e9-285bb6bc3464", "bridge": "br-int", "label": "tempest-network-smoke--753987758", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386a7730-6a", "ovs_interfaceid": "386a7730-6a16-4b18-b368-561762a8f7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.300 254884 DEBUG nova.network.os_vif_util [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:e4:a1,bridge_name='br-int',has_traffic_filtering=True,id=386a7730-6a16-4b18-b368-561762a8f7af,network=Network(f91dcb4b-184c-45d6-a0e9-285bb6bc3464),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap386a7730-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.301 254884 DEBUG nova.objects.instance [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid c05c1aad-49b9-43df-99b6-602b689d2c8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.317 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] End _get_guest_xml xml=<domain type="kvm">
Jan 26 10:14:38 compute-0 nova_compute[254880]:   <uuid>c05c1aad-49b9-43df-99b6-602b689d2c8d</uuid>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   <name>instance-00000009</name>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   <memory>131072</memory>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   <vcpu>1</vcpu>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   <metadata>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <nova:name>tempest-TestNetworkBasicOps-server-1518110532</nova:name>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <nova:creationTime>2026-01-26 10:14:37</nova:creationTime>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <nova:flavor name="m1.nano">
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <nova:memory>128</nova:memory>
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <nova:disk>1</nova:disk>
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <nova:swap>0</nova:swap>
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <nova:vcpus>1</nova:vcpus>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       </nova:flavor>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <nova:owner>
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <nova:user uuid="c1208d3e25b940ea93fe76884c7a53db">tempest-TestNetworkBasicOps-966559857-project-member</nova:user>
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <nova:project uuid="6ed221b375a44fc2bb2a8f232c5446e7">tempest-TestNetworkBasicOps-966559857</nova:project>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       </nova:owner>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <nova:root type="image" uuid="6789692f-fc1f-4efa-ae75-dcc13be695ef"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <nova:ports>
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <nova:port uuid="386a7730-6a16-4b18-b368-561762a8f7af">
Jan 26 10:14:38 compute-0 nova_compute[254880]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:         </nova:port>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       </nova:ports>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     </nova:instance>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   </metadata>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   <sysinfo type="smbios">
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <system>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <entry name="manufacturer">RDO</entry>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <entry name="product">OpenStack Compute</entry>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <entry name="serial">c05c1aad-49b9-43df-99b6-602b689d2c8d</entry>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <entry name="uuid">c05c1aad-49b9-43df-99b6-602b689d2c8d</entry>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <entry name="family">Virtual Machine</entry>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     </system>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   </sysinfo>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   <os>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <boot dev="hd"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <smbios mode="sysinfo"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   </os>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   <features>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <acpi/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <apic/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <vmcoreinfo/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   </features>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   <clock offset="utc">
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <timer name="hpet" present="no"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   </clock>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   <cpu mode="host-model" match="exact">
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   </cpu>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   <devices>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <disk type="network" device="disk">
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <driver type="raw" cache="none"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <source protocol="rbd" name="vms/c05c1aad-49b9-43df-99b6-602b689d2c8d_disk">
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <host name="192.168.122.100" port="6789"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <host name="192.168.122.102" port="6789"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <host name="192.168.122.101" port="6789"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       </source>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <auth username="openstack">
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <secret type="ceph" uuid="1a70b85d-e3fd-5814-8a6a-37ea00fcae30"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <target dev="vda" bus="virtio"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <disk type="network" device="cdrom">
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <driver type="raw" cache="none"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <source protocol="rbd" name="vms/c05c1aad-49b9-43df-99b6-602b689d2c8d_disk.config">
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <host name="192.168.122.100" port="6789"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <host name="192.168.122.102" port="6789"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <host name="192.168.122.101" port="6789"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       </source>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <auth username="openstack">
Jan 26 10:14:38 compute-0 nova_compute[254880]:         <secret type="ceph" uuid="1a70b85d-e3fd-5814-8a6a-37ea00fcae30"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <target dev="sda" bus="sata"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <interface type="ethernet">
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <mac address="fa:16:3e:d6:e4:a1"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <model type="virtio"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <mtu size="1442"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <target dev="tap386a7730-6a"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <serial type="pty">
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <log file="/var/lib/nova/instances/c05c1aad-49b9-43df-99b6-602b689d2c8d/console.log" append="off"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     </serial>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <video>
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <model type="virtio"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     </video>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <input type="tablet" bus="usb"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <rng model="virtio">
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <backend model="random">/dev/urandom</backend>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     </rng>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <controller type="usb" index="0"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     <memballoon model="virtio">
Jan 26 10:14:38 compute-0 nova_compute[254880]:       <stats period="10"/>
Jan 26 10:14:38 compute-0 nova_compute[254880]:     </memballoon>
Jan 26 10:14:38 compute-0 nova_compute[254880]:   </devices>
Jan 26 10:14:38 compute-0 nova_compute[254880]: </domain>
Jan 26 10:14:38 compute-0 nova_compute[254880]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.319 254884 DEBUG nova.compute.manager [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Preparing to wait for external event network-vif-plugged-386a7730-6a16-4b18-b368-561762a8f7af prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.319 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.319 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.319 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.320 254884 DEBUG nova.virt.libvirt.vif [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T10:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1518110532',display_name='tempest-TestNetworkBasicOps-server-1518110532',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1518110532',id=9,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBYnDhbtyGqpEY46JiIKjPTJn7X7SDbPg9dxMywFFlfcufg39j/xqUFKCoYA/S5N/V7V2wB2/Cd1QuC4xtyvWS4ae02/rbGvNQh2VuaoSIu9BeIZQQ3HO+cbbBgHZD/G2g==',key_name='tempest-TestNetworkBasicOps-1626087452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-bd5018dm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T10:14:32Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=c05c1aad-49b9-43df-99b6-602b689d2c8d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "386a7730-6a16-4b18-b368-561762a8f7af", "address": "fa:16:3e:d6:e4:a1", "network": {"id": "f91dcb4b-184c-45d6-a0e9-285bb6bc3464", "bridge": "br-int", "label": "tempest-network-smoke--753987758", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386a7730-6a", "ovs_interfaceid": "386a7730-6a16-4b18-b368-561762a8f7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.320 254884 DEBUG nova.network.os_vif_util [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "386a7730-6a16-4b18-b368-561762a8f7af", "address": "fa:16:3e:d6:e4:a1", "network": {"id": "f91dcb4b-184c-45d6-a0e9-285bb6bc3464", "bridge": "br-int", "label": "tempest-network-smoke--753987758", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386a7730-6a", "ovs_interfaceid": "386a7730-6a16-4b18-b368-561762a8f7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.321 254884 DEBUG nova.network.os_vif_util [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:e4:a1,bridge_name='br-int',has_traffic_filtering=True,id=386a7730-6a16-4b18-b368-561762a8f7af,network=Network(f91dcb4b-184c-45d6-a0e9-285bb6bc3464),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap386a7730-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.321 254884 DEBUG os_vif [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:e4:a1,bridge_name='br-int',has_traffic_filtering=True,id=386a7730-6a16-4b18-b368-561762a8f7af,network=Network(f91dcb4b-184c-45d6-a0e9-285bb6bc3464),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap386a7730-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.322 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.322 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.323 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.326 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.326 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap386a7730-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.327 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap386a7730-6a, col_values=(('external_ids', {'iface-id': '386a7730-6a16-4b18-b368-561762a8f7af', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:e4:a1', 'vm-uuid': 'c05c1aad-49b9-43df-99b6-602b689d2c8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:14:38 compute-0 NetworkManager[48970]: <info>  [1769422478.3721] manager: (tap386a7730-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.373 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.375 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.378 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.379 254884 INFO os_vif [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:e4:a1,bridge_name='br-int',has_traffic_filtering=True,id=386a7730-6a16-4b18-b368-561762a8f7af,network=Network(f91dcb4b-184c-45d6-a0e9-285bb6bc3464),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap386a7730-6a')
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.436 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.437 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.437 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No VIF found with MAC fa:16:3e:d6:e4:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.438 254884 INFO nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Using config drive
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.464 254884 DEBUG nova.storage.rbd_utils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image c05c1aad-49b9-43df-99b6-602b689d2c8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:14:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:14:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:14:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:38.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:14:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2833343807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:38 compute-0 ceph-mon[74456]: pgmap v972: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:14:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2098853532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:14:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3798574157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4096772364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:14:38 compute-0 nova_compute[254880]: 2026-01-26 10:14:38.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:14:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:14:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:39.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.282 254884 INFO nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Creating config drive at /var/lib/nova/instances/c05c1aad-49b9-43df-99b6-602b689d2c8d/disk.config
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.286 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c05c1aad-49b9-43df-99b6-602b689d2c8d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr7a87qf9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.411 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c05c1aad-49b9-43df-99b6-602b689d2c8d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr7a87qf9" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.448 254884 DEBUG nova.storage.rbd_utils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image c05c1aad-49b9-43df-99b6-602b689d2c8d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.452 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c05c1aad-49b9-43df-99b6-602b689d2c8d/disk.config c05c1aad-49b9-43df-99b6-602b689d2c8d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.661 254884 DEBUG oslo_concurrency.processutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c05c1aad-49b9-43df-99b6-602b689d2c8d/disk.config c05c1aad-49b9-43df-99b6-602b689d2c8d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.662 254884 INFO nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Deleting local config drive /var/lib/nova/instances/c05c1aad-49b9-43df-99b6-602b689d2c8d/disk.config because it was imported into RBD.
Jan 26 10:14:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v973: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 26 10:14:39 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 26 10:14:39 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 26 10:14:39 compute-0 kernel: tap386a7730-6a: entered promiscuous mode
Jan 26 10:14:39 compute-0 NetworkManager[48970]: <info>  [1769422479.7572] manager: (tap386a7730-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.804 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:39 compute-0 ovn_controller[155832]: 2026-01-26T10:14:39Z|00058|binding|INFO|Claiming lport 386a7730-6a16-4b18-b368-561762a8f7af for this chassis.
Jan 26 10:14:39 compute-0 ovn_controller[155832]: 2026-01-26T10:14:39Z|00059|binding|INFO|386a7730-6a16-4b18-b368-561762a8f7af: Claiming fa:16:3e:d6:e4:a1 10.100.0.10
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.809 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.816 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:39 compute-0 NetworkManager[48970]: <info>  [1769422479.8200] manager: (patch-provnet-94d9950f-5cf2-4813-9455-dd14377245f4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.819 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:39 compute-0 NetworkManager[48970]: <info>  [1769422479.8207] manager: (patch-br-int-to-provnet-94d9950f-5cf2-4813-9455-dd14377245f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.825 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:e4:a1 10.100.0.10'], port_security=['fa:16:3e:d6:e4:a1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1712540863', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c05c1aad-49b9-43df-99b6-602b689d2c8d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f91dcb4b-184c-45d6-a0e9-285bb6bc3464', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1712540863', 'neutron:project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'neutron:revision_number': '7', 'neutron:security_group_ids': '75a6a4cb-bd58-457c-b449-9db5f70f3f78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01b44e6c-3a91-48f0-92f1-3334bccbc3c9, chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], logical_port=386a7730-6a16-4b18-b368-561762a8f7af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.826 166625 INFO neutron.agent.ovn.metadata.agent [-] Port 386a7730-6a16-4b18-b368-561762a8f7af in datapath f91dcb4b-184c-45d6-a0e9-285bb6bc3464 bound to our chassis
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.828 166625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f91dcb4b-184c-45d6-a0e9-285bb6bc3464
Jan 26 10:14:39 compute-0 systemd-machined[221254]: New machine qemu-3-instance-00000009.
Jan 26 10:14:39 compute-0 systemd-udevd[271768]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.840 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[a3354971-a4fe-4072-94c8-9c61190532bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.841 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf91dcb4b-11 in ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.845 261020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf91dcb4b-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.845 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[7b212b2a-a249-4aa5-88de-6f178dc81806]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:39 compute-0 NetworkManager[48970]: <info>  [1769422479.8464] device (tap386a7730-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.846 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0460d1-5725-4c21-a170-269d47729b18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:39 compute-0 NetworkManager[48970]: <info>  [1769422479.8476] device (tap386a7730-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 10:14:39 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000009.
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.857 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3b5f29-327a-4adc-874e-a6fd06e47b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/767898149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.886 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1afa55-f699-433c-8f97-492054221249]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.906 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.912 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.919 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[988155bf-614c-49ef-893e-5f00d8df8efe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:39 compute-0 ovn_controller[155832]: 2026-01-26T10:14:39Z|00060|binding|INFO|Setting lport 386a7730-6a16-4b18-b368-561762a8f7af ovn-installed in OVS
Jan 26 10:14:39 compute-0 ovn_controller[155832]: 2026-01-26T10:14:39Z|00061|binding|INFO|Setting lport 386a7730-6a16-4b18-b368-561762a8f7af up in Southbound
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.925 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.925 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[71f200cc-bbce-48c1-beb3-9d2f361be67a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:39 compute-0 NetworkManager[48970]: <info>  [1769422479.9272] manager: (tapf91dcb4b-10): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Jan 26 10:14:39 compute-0 nova_compute[254880]: 2026-01-26 10:14:39.953 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.955 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[5128193f-b626-4cc0-b040-8a7fecf11e7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.958 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[522c2a18-e62d-46ec-8b5e-4db4b95476e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:39 compute-0 NetworkManager[48970]: <info>  [1769422479.9855] device (tapf91dcb4b-10): carrier: link connected
Jan 26 10:14:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:39.990 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8aafe0-f3d7-4909-910e-f49889e9e3b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:40.008 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0cd266-f39c-47f1-b62f-fabba4ba8483]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf91dcb4b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:0e:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443364, 'reachable_time': 34562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271801, 'error': None, 'target': 'ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:40.021 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[3387c926-a434-49c4-b74d-92e00449ac16]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:efb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 443364, 'tstamp': 443364}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271802, 'error': None, 'target': 'ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:40.039 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[c6fc4d37-dbfe-4ce9-8b65-cd02fbefefdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf91dcb4b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:0e:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443364, 'reachable_time': 34562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271803, 'error': None, 'target': 'ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:40.072 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[9c95bbd2-b86b-499e-9f4c-8161daecdbc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:40.122 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[e9092b95-c36c-4691-b748-bb9020213320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:40.123 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf91dcb4b-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:40.123 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:40.124 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf91dcb4b-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.125 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:40 compute-0 NetworkManager[48970]: <info>  [1769422480.1268] manager: (tapf91dcb4b-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Jan 26 10:14:40 compute-0 kernel: tapf91dcb4b-10: entered promiscuous mode
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:40.130 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf91dcb4b-10, col_values=(('external_ids', {'iface-id': '242dde27-5aff-4cac-b664-221ab4bfb94f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.129 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:40 compute-0 ovn_controller[155832]: 2026-01-26T10:14:40Z|00062|binding|INFO|Releasing lport 242dde27-5aff-4cac-b664-221ab4bfb94f from this chassis (sb_readonly=0)
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.130 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.144 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:40.145 166625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f91dcb4b-184c-45d6-a0e9-285bb6bc3464.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f91dcb4b-184c-45d6-a0e9-285bb6bc3464.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:40.145 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3e0711-7dc6-4ea6-a04a-283b15330d4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:40.146 166625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: global
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     log         /dev/log local0 debug
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     log-tag     haproxy-metadata-proxy-f91dcb4b-184c-45d6-a0e9-285bb6bc3464
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     user        root
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     group       root
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     maxconn     1024
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     pidfile     /var/lib/neutron/external/pids/f91dcb4b-184c-45d6-a0e9-285bb6bc3464.pid.haproxy
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     daemon
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: defaults
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     log global
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     mode http
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     option httplog
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     option dontlognull
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     option http-server-close
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     option forwardfor
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     retries                 3
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     timeout http-request    30s
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     timeout connect         30s
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     timeout client          32s
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     timeout server          32s
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     timeout http-keep-alive 30s
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: listen listener
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     bind 169.254.169.254:80
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:     http-request add-header X-OVN-Network-ID f91dcb4b-184c-45d6-a0e9-285bb6bc3464
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 10:14:40 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:40.146 166625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464', 'env', 'PROCESS_TAG=haproxy-f91dcb4b-184c-45d6-a0e9-285bb6bc3464', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f91dcb4b-184c-45d6-a0e9-285bb6bc3464.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.168 254884 DEBUG nova.network.neutron [req-274b3f6f-f692-4395-81c7-af575963ea9b req-d4b80834-52de-4484-8640-81a6b85a3641 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Updated VIF entry in instance network info cache for port 386a7730-6a16-4b18-b368-561762a8f7af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.168 254884 DEBUG nova.network.neutron [req-274b3f6f-f692-4395-81c7-af575963ea9b req-d4b80834-52de-4484-8640-81a6b85a3641 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Updating instance_info_cache with network_info: [{"id": "386a7730-6a16-4b18-b368-561762a8f7af", "address": "fa:16:3e:d6:e4:a1", "network": {"id": "f91dcb4b-184c-45d6-a0e9-285bb6bc3464", "bridge": "br-int", "label": "tempest-network-smoke--753987758", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386a7730-6a", "ovs_interfaceid": "386a7730-6a16-4b18-b368-561762a8f7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.184 254884 DEBUG oslo_concurrency.lockutils [req-274b3f6f-f692-4395-81c7-af575963ea9b req-d4b80834-52de-4484-8640-81a6b85a3641 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-c05c1aad-49b9-43df-99b6-602b689d2c8d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.361 254884 DEBUG nova.compute.manager [req-0a0cf8c5-7088-4f3a-859a-68f8d4c2bb34 req-19595ceb-78f0-4e57-86d2-7c2372a80b1e b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Received event network-vif-plugged-386a7730-6a16-4b18-b368-561762a8f7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.361 254884 DEBUG oslo_concurrency.lockutils [req-0a0cf8c5-7088-4f3a-859a-68f8d4c2bb34 req-19595ceb-78f0-4e57-86d2-7c2372a80b1e b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.361 254884 DEBUG oslo_concurrency.lockutils [req-0a0cf8c5-7088-4f3a-859a-68f8d4c2bb34 req-19595ceb-78f0-4e57-86d2-7c2372a80b1e b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.362 254884 DEBUG oslo_concurrency.lockutils [req-0a0cf8c5-7088-4f3a-859a-68f8d4c2bb34 req-19595ceb-78f0-4e57-86d2-7c2372a80b1e b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.362 254884 DEBUG nova.compute.manager [req-0a0cf8c5-7088-4f3a-859a-68f8d4c2bb34 req-19595ceb-78f0-4e57-86d2-7c2372a80b1e b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Processing event network-vif-plugged-386a7730-6a16-4b18-b368-561762a8f7af _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.480 254884 DEBUG nova.compute.manager [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.482 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422480.479742, c05c1aad-49b9-43df-99b6-602b689d2c8d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.482 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] VM Started (Lifecycle Event)
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.485 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.488 254884 INFO nova.virt.libvirt.driver [-] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Instance spawned successfully.
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.488 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.517 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.522 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 10:14:40 compute-0 podman[271876]: 2026-01-26 10:14:40.524894244 +0000 UTC m=+0.081510899 container create 6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.526 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.528 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.529 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.529 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.530 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.531 254884 DEBUG nova.virt.libvirt.driver [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:14:40 compute-0 podman[271876]: 2026-01-26 10:14:40.466738493 +0000 UTC m=+0.023355168 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 26 10:14:40 compute-0 systemd[1]: Started libpod-conmon-6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124.scope.
Jan 26 10:14:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5380ecc77a687dc67c6e564df4151b83b951b13597c106e72f747fe387a337/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 10:14:40 compute-0 podman[271876]: 2026-01-26 10:14:40.615474995 +0000 UTC m=+0.172091670 container init 6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 10:14:40 compute-0 podman[271876]: 2026-01-26 10:14:40.620615824 +0000 UTC m=+0.177232469 container start 6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 10:14:40 compute-0 neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464[271893]: [NOTICE]   (271897) : New worker (271899) forked
Jan 26 10:14:40 compute-0 neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464[271893]: [NOTICE]   (271897) : Loading success.
Jan 26 10:14:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:14:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:40.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.834 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.835 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422480.4799418, c05c1aad-49b9-43df-99b6-602b689d2c8d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.835 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] VM Paused (Lifecycle Event)
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.863 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.868 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422480.4847035, c05c1aad-49b9-43df-99b6-602b689d2c8d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.868 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] VM Resumed (Lifecycle Event)
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.874 254884 INFO nova.compute.manager [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Took 8.64 seconds to spawn the instance on the hypervisor.
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.875 254884 DEBUG nova.compute.manager [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.884 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.889 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.909 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.934 254884 INFO nova.compute.manager [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Took 9.53 seconds to build instance.
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.950 254884 DEBUG oslo_concurrency.lockutils [None req-ee76568f-420a-4a9d-b204-bccc6591fd6c c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:40 compute-0 nova_compute[254880]: 2026-01-26 10:14:40.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:14:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:41.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:41 compute-0 ceph-mon[74456]: pgmap v973: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 26 10:14:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=cleanup t=2026-01-26T10:14:41.422996981Z level=info msg="Completed cleanup jobs" duration=24.605721ms
Jan 26 10:14:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=grafana.update.checker t=2026-01-26T10:14:41.521742621Z level=info msg="Update check succeeded" duration=51.609225ms
Jan 26 10:14:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=plugins.update.checker t=2026-01-26T10:14:41.522005639Z level=info msg="Update check succeeded" duration=51.826991ms
Jan 26 10:14:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v974: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 26 10:14:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:14:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:14:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:14:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:14:42 compute-0 ceph-mon[74456]: pgmap v974: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.476 254884 DEBUG nova.compute.manager [req-0f6e3a62-9c10-4659-adb2-9fd3c5a3296d req-ad908afd-0381-4393-bb2f-12a3c824609a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Received event network-vif-plugged-386a7730-6a16-4b18-b368-561762a8f7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.477 254884 DEBUG oslo_concurrency.lockutils [req-0f6e3a62-9c10-4659-adb2-9fd3c5a3296d req-ad908afd-0381-4393-bb2f-12a3c824609a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.477 254884 DEBUG oslo_concurrency.lockutils [req-0f6e3a62-9c10-4659-adb2-9fd3c5a3296d req-ad908afd-0381-4393-bb2f-12a3c824609a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.477 254884 DEBUG oslo_concurrency.lockutils [req-0f6e3a62-9c10-4659-adb2-9fd3c5a3296d req-ad908afd-0381-4393-bb2f-12a3c824609a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.478 254884 DEBUG nova.compute.manager [req-0f6e3a62-9c10-4659-adb2-9fd3c5a3296d req-ad908afd-0381-4393-bb2f-12a3c824609a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] No waiting events found dispatching network-vif-plugged-386a7730-6a16-4b18-b368-561762a8f7af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.478 254884 WARNING nova.compute.manager [req-0f6e3a62-9c10-4659-adb2-9fd3c5a3296d req-ad908afd-0381-4393-bb2f-12a3c824609a b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Received unexpected event network-vif-plugged-386a7730-6a16-4b18-b368-561762a8f7af for instance with vm_state active and task_state deleting.
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.479 254884 DEBUG oslo_concurrency.lockutils [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "c05c1aad-49b9-43df-99b6-602b689d2c8d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.480 254884 DEBUG oslo_concurrency.lockutils [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.480 254884 DEBUG oslo_concurrency.lockutils [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.480 254884 DEBUG oslo_concurrency.lockutils [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.481 254884 DEBUG oslo_concurrency.lockutils [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.482 254884 INFO nova.compute.manager [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Terminating instance
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.483 254884 DEBUG nova.compute.manager [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 10:14:42 compute-0 kernel: tap386a7730-6a (unregistering): left promiscuous mode
Jan 26 10:14:42 compute-0 NetworkManager[48970]: <info>  [1769422482.5256] device (tap386a7730-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 10:14:42 compute-0 ovn_controller[155832]: 2026-01-26T10:14:42Z|00063|binding|INFO|Releasing lport 386a7730-6a16-4b18-b368-561762a8f7af from this chassis (sb_readonly=0)
Jan 26 10:14:42 compute-0 ovn_controller[155832]: 2026-01-26T10:14:42Z|00064|binding|INFO|Setting lport 386a7730-6a16-4b18-b368-561762a8f7af down in Southbound
Jan 26 10:14:42 compute-0 ovn_controller[155832]: 2026-01-26T10:14:42Z|00065|binding|INFO|Removing iface tap386a7730-6a ovn-installed in OVS
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.534 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.536 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.540 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:e4:a1 10.100.0.10'], port_security=['fa:16:3e:d6:e4:a1 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1712540863', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c05c1aad-49b9-43df-99b6-602b689d2c8d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f91dcb4b-184c-45d6-a0e9-285bb6bc3464', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1712540863', 'neutron:project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'neutron:revision_number': '9', 'neutron:security_group_ids': '75a6a4cb-bd58-457c-b449-9db5f70f3f78', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.178', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01b44e6c-3a91-48f0-92f1-3334bccbc3c9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], logical_port=386a7730-6a16-4b18-b368-561762a8f7af) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.542 166625 INFO neutron.agent.ovn.metadata.agent [-] Port 386a7730-6a16-4b18-b368-561762a8f7af in datapath f91dcb4b-184c-45d6-a0e9-285bb6bc3464 unbound from our chassis
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.543 166625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f91dcb4b-184c-45d6-a0e9-285bb6bc3464, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.545 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[20a77fef-b76e-4953-8bb5-04ee0ce742fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.545 166625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464 namespace which is not needed anymore
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.554 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:42 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 26 10:14:42 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000009.scope: Consumed 2.749s CPU time.
Jan 26 10:14:42 compute-0 systemd-machined[221254]: Machine qemu-3-instance-00000009 terminated.
Jan 26 10:14:42 compute-0 neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464[271893]: [NOTICE]   (271897) : haproxy version is 2.8.14-c23fe91
Jan 26 10:14:42 compute-0 neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464[271893]: [NOTICE]   (271897) : path to executable is /usr/sbin/haproxy
Jan 26 10:14:42 compute-0 neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464[271893]: [WARNING]  (271897) : Exiting Master process...
Jan 26 10:14:42 compute-0 neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464[271893]: [ALERT]    (271897) : Current worker (271899) exited with code 143 (Terminated)
Jan 26 10:14:42 compute-0 neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464[271893]: [WARNING]  (271897) : All workers exited. Exiting... (0)
Jan 26 10:14:42 compute-0 systemd[1]: libpod-6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124.scope: Deactivated successfully.
Jan 26 10:14:42 compute-0 podman[271932]: 2026-01-26 10:14:42.668615256 +0000 UTC m=+0.040971282 container died 6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 10:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124-userdata-shm.mount: Deactivated successfully.
Jan 26 10:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b5380ecc77a687dc67c6e564df4151b83b951b13597c106e72f747fe387a337-merged.mount: Deactivated successfully.
Jan 26 10:14:42 compute-0 NetworkManager[48970]: <info>  [1769422482.7024] manager: (tap386a7730-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Jan 26 10:14:42 compute-0 podman[271932]: 2026-01-26 10:14:42.702746242 +0000 UTC m=+0.075102268 container cleanup 6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.718 254884 INFO nova.virt.libvirt.driver [-] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Instance destroyed successfully.
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.720 254884 DEBUG nova.objects.instance [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'resources' on Instance uuid c05c1aad-49b9-43df-99b6-602b689d2c8d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:14:42 compute-0 systemd[1]: libpod-conmon-6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124.scope: Deactivated successfully.
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.745 254884 DEBUG nova.virt.libvirt.vif [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T10:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1518110532',display_name='tempest-TestNetworkBasicOps-server-1518110532',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1518110532',id=9,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBYnDhbtyGqpEY46JiIKjPTJn7X7SDbPg9dxMywFFlfcufg39j/xqUFKCoYA/S5N/V7V2wB2/Cd1QuC4xtyvWS4ae02/rbGvNQh2VuaoSIu9BeIZQQ3HO+cbbBgHZD/G2g==',key_name='tempest-TestNetworkBasicOps-1626087452',keypairs=<?>,launch_index=0,launched_at=2026-01-26T10:14:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-bd5018dm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T10:14:40Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=c05c1aad-49b9-43df-99b6-602b689d2c8d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "386a7730-6a16-4b18-b368-561762a8f7af", "address": "fa:16:3e:d6:e4:a1", "network": {"id": "f91dcb4b-184c-45d6-a0e9-285bb6bc3464", "bridge": "br-int", "label": "tempest-network-smoke--753987758", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386a7730-6a", "ovs_interfaceid": "386a7730-6a16-4b18-b368-561762a8f7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.746 254884 DEBUG nova.network.os_vif_util [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "386a7730-6a16-4b18-b368-561762a8f7af", "address": "fa:16:3e:d6:e4:a1", "network": {"id": "f91dcb4b-184c-45d6-a0e9-285bb6bc3464", "bridge": "br-int", "label": "tempest-network-smoke--753987758", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap386a7730-6a", "ovs_interfaceid": "386a7730-6a16-4b18-b368-561762a8f7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.747 254884 DEBUG nova.network.os_vif_util [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:e4:a1,bridge_name='br-int',has_traffic_filtering=True,id=386a7730-6a16-4b18-b368-561762a8f7af,network=Network(f91dcb4b-184c-45d6-a0e9-285bb6bc3464),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap386a7730-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.747 254884 DEBUG os_vif [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:e4:a1,bridge_name='br-int',has_traffic_filtering=True,id=386a7730-6a16-4b18-b368-561762a8f7af,network=Network(f91dcb4b-184c-45d6-a0e9-285bb6bc3464),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap386a7730-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.749 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.749 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap386a7730-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.751 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:42.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.754 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.756 254884 INFO os_vif [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:e4:a1,bridge_name='br-int',has_traffic_filtering=True,id=386a7730-6a16-4b18-b368-561762a8f7af,network=Network(f91dcb4b-184c-45d6-a0e9-285bb6bc3464),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap386a7730-6a')
Jan 26 10:14:42 compute-0 podman[271972]: 2026-01-26 10:14:42.788874233 +0000 UTC m=+0.052541851 container remove 6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.795 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[36d634e1-e584-4bd1-8f2b-ce98c4fc26fd]: (4, ('Mon Jan 26 10:14:42 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464 (6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124)\n6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124\nMon Jan 26 10:14:42 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464 (6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124)\n6d6cdbecb8285f3fbe1d1d1a4d641d03647c145cd9222b92b06e4b1dd700e124\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.798 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[e711da98-193e-4903-a26e-b9444643af4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.799 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf91dcb4b-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:14:42 compute-0 kernel: tapf91dcb4b-10: left promiscuous mode
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.800 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.803 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.807 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb9b490-5239-4bfc-894b-c00bbe299740]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.818 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.823 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[a2839a5f-0c25-46da-ab24-433950d6cd5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.824 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[4236e26e-2e9f-4302-9c85-c0286154fdd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.839 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[9dfb63bb-265c-47ab-bea3-80b687440471]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 443357, 'reachable_time': 44902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272004, 'error': None, 'target': 'ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.841 167020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f91dcb4b-184c-45d6-a0e9-285bb6bc3464 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 10:14:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:42.841 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed8045a-04d4-4368-a675-4b747bd0d9fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:14:42 compute-0 systemd[1]: run-netns-ovnmeta\x2df91dcb4b\x2d184c\x2d45d6\x2da0e9\x2d285bb6bc3464.mount: Deactivated successfully.
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:14:42 compute-0 nova_compute[254880]: 2026-01-26 10:14:42.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:14:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:14:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:43.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:14:43 compute-0 nova_compute[254880]: 2026-01-26 10:14:43.397 254884 INFO nova.virt.libvirt.driver [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Deleting instance files /var/lib/nova/instances/c05c1aad-49b9-43df-99b6-602b689d2c8d_del
Jan 26 10:14:43 compute-0 nova_compute[254880]: 2026-01-26 10:14:43.398 254884 INFO nova.virt.libvirt.driver [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Deletion of /var/lib/nova/instances/c05c1aad-49b9-43df-99b6-602b689d2c8d_del complete
Jan 26 10:14:43 compute-0 nova_compute[254880]: 2026-01-26 10:14:43.442 254884 INFO nova.compute.manager [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Took 0.96 seconds to destroy the instance on the hypervisor.
Jan 26 10:14:43 compute-0 nova_compute[254880]: 2026-01-26 10:14:43.442 254884 DEBUG oslo.service.loopingcall [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 10:14:43 compute-0 nova_compute[254880]: 2026-01-26 10:14:43.442 254884 DEBUG nova.compute.manager [-] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 10:14:43 compute-0 nova_compute[254880]: 2026-01-26 10:14:43.443 254884 DEBUG nova.network.neutron [-] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 10:14:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:14:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v975: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 26 10:14:44 compute-0 nova_compute[254880]: 2026-01-26 10:14:44.550 254884 DEBUG nova.compute.manager [req-a9863f4d-175a-4546-b71d-78c6249059cd req-e10cc611-159b-4d2e-b85a-f60a0955568f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Received event network-vif-unplugged-386a7730-6a16-4b18-b368-561762a8f7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:14:44 compute-0 nova_compute[254880]: 2026-01-26 10:14:44.550 254884 DEBUG oslo_concurrency.lockutils [req-a9863f4d-175a-4546-b71d-78c6249059cd req-e10cc611-159b-4d2e-b85a-f60a0955568f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:44 compute-0 nova_compute[254880]: 2026-01-26 10:14:44.550 254884 DEBUG oslo_concurrency.lockutils [req-a9863f4d-175a-4546-b71d-78c6249059cd req-e10cc611-159b-4d2e-b85a-f60a0955568f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:44 compute-0 nova_compute[254880]: 2026-01-26 10:14:44.550 254884 DEBUG oslo_concurrency.lockutils [req-a9863f4d-175a-4546-b71d-78c6249059cd req-e10cc611-159b-4d2e-b85a-f60a0955568f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:44 compute-0 nova_compute[254880]: 2026-01-26 10:14:44.551 254884 DEBUG nova.compute.manager [req-a9863f4d-175a-4546-b71d-78c6249059cd req-e10cc611-159b-4d2e-b85a-f60a0955568f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] No waiting events found dispatching network-vif-unplugged-386a7730-6a16-4b18-b368-561762a8f7af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:14:44 compute-0 nova_compute[254880]: 2026-01-26 10:14:44.551 254884 DEBUG nova.compute.manager [req-a9863f4d-175a-4546-b71d-78c6249059cd req-e10cc611-159b-4d2e-b85a-f60a0955568f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Received event network-vif-unplugged-386a7730-6a16-4b18-b368-561762a8f7af for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 10:14:44 compute-0 nova_compute[254880]: 2026-01-26 10:14:44.551 254884 DEBUG nova.compute.manager [req-a9863f4d-175a-4546-b71d-78c6249059cd req-e10cc611-159b-4d2e-b85a-f60a0955568f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Received event network-vif-plugged-386a7730-6a16-4b18-b368-561762a8f7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:14:44 compute-0 nova_compute[254880]: 2026-01-26 10:14:44.551 254884 DEBUG oslo_concurrency.lockutils [req-a9863f4d-175a-4546-b71d-78c6249059cd req-e10cc611-159b-4d2e-b85a-f60a0955568f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:44 compute-0 nova_compute[254880]: 2026-01-26 10:14:44.551 254884 DEBUG oslo_concurrency.lockutils [req-a9863f4d-175a-4546-b71d-78c6249059cd req-e10cc611-159b-4d2e-b85a-f60a0955568f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:44 compute-0 nova_compute[254880]: 2026-01-26 10:14:44.552 254884 DEBUG oslo_concurrency.lockutils [req-a9863f4d-175a-4546-b71d-78c6249059cd req-e10cc611-159b-4d2e-b85a-f60a0955568f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:44 compute-0 nova_compute[254880]: 2026-01-26 10:14:44.552 254884 DEBUG nova.compute.manager [req-a9863f4d-175a-4546-b71d-78c6249059cd req-e10cc611-159b-4d2e-b85a-f60a0955568f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] No waiting events found dispatching network-vif-plugged-386a7730-6a16-4b18-b368-561762a8f7af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:14:44 compute-0 nova_compute[254880]: 2026-01-26 10:14:44.552 254884 WARNING nova.compute.manager [req-a9863f4d-175a-4546-b71d-78c6249059cd req-e10cc611-159b-4d2e-b85a-f60a0955568f b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Received unexpected event network-vif-plugged-386a7730-6a16-4b18-b368-561762a8f7af for instance with vm_state active and task_state deleting.
Jan 26 10:14:44 compute-0 sudo[272011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:14:44 compute-0 sudo[272011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:14:44 compute-0 sudo[272011]: pam_unix(sudo:session): session closed for user root
Jan 26 10:14:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:44.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:44 compute-0 ceph-mon[74456]: pgmap v975: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 26 10:14:44 compute-0 nova_compute[254880]: 2026-01-26 10:14:44.817 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:45.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:45 compute-0 nova_compute[254880]: 2026-01-26 10:14:45.547 254884 DEBUG nova.network.neutron [-] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:14:45 compute-0 nova_compute[254880]: 2026-01-26 10:14:45.568 254884 INFO nova.compute.manager [-] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Took 2.13 seconds to deallocate network for instance.
Jan 26 10:14:45 compute-0 nova_compute[254880]: 2026-01-26 10:14:45.611 254884 DEBUG oslo_concurrency.lockutils [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:45 compute-0 nova_compute[254880]: 2026-01-26 10:14:45.611 254884 DEBUG oslo_concurrency.lockutils [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:45 compute-0 nova_compute[254880]: 2026-01-26 10:14:45.663 254884 DEBUG oslo_concurrency.processutils [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:14:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 26 10:14:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:14:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4239229588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:46 compute-0 nova_compute[254880]: 2026-01-26 10:14:46.123 254884 DEBUG oslo_concurrency.processutils [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:14:46 compute-0 nova_compute[254880]: 2026-01-26 10:14:46.129 254884 DEBUG nova.compute.provider_tree [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:14:46 compute-0 nova_compute[254880]: 2026-01-26 10:14:46.152 254884 DEBUG nova.scheduler.client.report [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:14:46 compute-0 nova_compute[254880]: 2026-01-26 10:14:46.180 254884 DEBUG oslo_concurrency.lockutils [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:46 compute-0 nova_compute[254880]: 2026-01-26 10:14:46.219 254884 INFO nova.scheduler.client.report [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Deleted allocations for instance c05c1aad-49b9-43df-99b6-602b689d2c8d
Jan 26 10:14:46 compute-0 nova_compute[254880]: 2026-01-26 10:14:46.282 254884 DEBUG oslo_concurrency.lockutils [None req-20c61150-edb1-4c5c-aefd-64a7df9e2ee8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "c05c1aad-49b9-43df-99b6-602b689d2c8d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:14:46] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Jan 26 10:14:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:14:46] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Jan 26 10:14:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:46.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:14:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:14:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:14:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:14:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:47.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:14:47.167Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:14:47 compute-0 ceph-mon[74456]: pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 26 10:14:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4239229588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:14:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 26 10:14:47 compute-0 nova_compute[254880]: 2026-01-26 10:14:47.753 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:48 compute-0 ceph-mon[74456]: pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 26 10:14:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:14:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:14:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:14:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:14:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:48.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:14:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:14:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:14:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:14:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:14:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:14:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:14:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:49.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:49 compute-0 podman[272063]: 2026-01-26 10:14:49.153311966 +0000 UTC m=+0.071234534 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 10:14:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:14:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 26 10:14:49 compute-0 nova_compute[254880]: 2026-01-26 10:14:49.841 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:50 compute-0 ceph-mon[74456]: pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 26 10:14:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:50.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:14:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:51.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:14:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v979: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 98 op/s
Jan 26 10:14:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:14:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:14:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:14:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:14:52 compute-0 nova_compute[254880]: 2026-01-26 10:14:52.755 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:52.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:52 compute-0 ceph-mon[74456]: pgmap v979: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 98 op/s
Jan 26 10:14:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:14:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:53.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:14:53 compute-0 nova_compute[254880]: 2026-01-26 10:14:53.644 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:14:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 98 op/s
Jan 26 10:14:53 compute-0 nova_compute[254880]: 2026-01-26 10:14:53.723 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:54.699 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:14:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:54.700 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:14:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:14:54.700 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:14:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:54.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:54 compute-0 nova_compute[254880]: 2026-01-26 10:14:54.842 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:54 compute-0 ceph-mon[74456]: pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 98 op/s
Jan 26 10:14:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:55.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v981: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 98 op/s
Jan 26 10:14:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:14:56] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 26 10:14:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:14:56] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Jan 26 10:14:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:56.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:56 compute-0 ceph-mon[74456]: pgmap v981: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 98 op/s
Jan 26 10:14:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:14:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:14:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:14:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:14:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:14:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:57.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:14:57.168Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:14:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v982: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:14:57 compute-0 nova_compute[254880]: 2026-01-26 10:14:57.716 254884 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769422482.715547, c05c1aad-49b9-43df-99b6-602b689d2c8d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:14:57 compute-0 nova_compute[254880]: 2026-01-26 10:14:57.717 254884 INFO nova.compute.manager [-] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] VM Stopped (Lifecycle Event)
Jan 26 10:14:57 compute-0 nova_compute[254880]: 2026-01-26 10:14:57.750 254884 DEBUG nova.compute.manager [None req-90301dcf-130f-49a5-b067-1271ca9f3c17 - - - - - -] [instance: c05c1aad-49b9-43df-99b6-602b689d2c8d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:14:57 compute-0 nova_compute[254880]: 2026-01-26 10:14:57.757 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:14:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:14:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:14:58.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:58 compute-0 ceph-mon[74456]: pgmap v982: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:14:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/96791862' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:14:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/96791862' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:14:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:14:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:14:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:14:59.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:14:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v983: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:14:59 compute-0 nova_compute[254880]: 2026-01-26 10:14:59.844 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:00.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:00 compute-0 ceph-mon[74456]: pgmap v983: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:15:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:01.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v984: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:15:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:15:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:15:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:15:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:15:02 compute-0 nova_compute[254880]: 2026-01-26 10:15:02.761 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:15:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:02.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:15:03 compute-0 ceph-mon[74456]: pgmap v984: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:15:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:03.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:03 compute-0 podman[272100]: 2026-01-26 10:15:03.177129949 +0000 UTC m=+0.100727315 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 10:15:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:15:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v985: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:15:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:15:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:15:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:15:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:04.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:04 compute-0 sudo[272129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:15:04 compute-0 sudo[272129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:04 compute-0 sudo[272129]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:04 compute-0 nova_compute[254880]: 2026-01-26 10:15:04.846 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:05.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:05 compute-0 ceph-mon[74456]: pgmap v985: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:15:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v986: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:15:06 compute-0 ceph-mon[74456]: pgmap v986: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:15:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:15:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 26 10:15:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:15:06] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 26 10:15:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:15:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:06.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:15:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:15:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:15:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:15:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:15:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:07.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:15:07.170Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:15:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:15:07.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:15:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v987: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:15:07 compute-0 nova_compute[254880]: 2026-01-26 10:15:07.764 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:15:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:08.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:08 compute-0 ceph-mon[74456]: pgmap v987: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:15:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:09.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v988: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:15:09 compute-0 nova_compute[254880]: 2026-01-26 10:15:09.885 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:10 compute-0 ceph-mon[74456]: pgmap v988: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:15:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:15:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:10.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:15:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:11.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v989: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:15:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:15:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:15:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:15:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:15:12 compute-0 nova_compute[254880]: 2026-01-26 10:15:12.766 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:15:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:12.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:15:12 compute-0 ceph-mon[74456]: pgmap v989: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:15:12 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1075107949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:15:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:15:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:13.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:15:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:15:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v990: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:15:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:14.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:14 compute-0 nova_compute[254880]: 2026-01-26 10:15:14.887 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:14 compute-0 ceph-mon[74456]: pgmap v990: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:15:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:15.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v991: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:15:15 compute-0 sshd-session[272164]: Invalid user zabbix from 157.245.76.178 port 47080
Jan 26 10:15:15 compute-0 sshd-session[272164]: Connection closed by invalid user zabbix 157.245.76.178 port 47080 [preauth]
Jan 26 10:15:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:15:16] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 26 10:15:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:15:16] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 26 10:15:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:15:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:16.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:15:16 compute-0 ceph-mon[74456]: pgmap v991: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:15:16 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/559084737' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:15:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:15:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:15:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:15:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:15:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:17.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:15:17.171Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:15:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v992: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:15:17 compute-0 nova_compute[254880]: 2026-01-26 10:15:17.768 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:18 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3526497005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:15:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:15:18
Jan 26 10:15:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:15:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:15:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.rgw.root', 'vms', '.nfs']
Jan 26 10:15:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:15:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:15:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:18.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:15:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:15:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:15:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:15:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:15:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:15:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:15:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:15:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:15:19 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:15:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:19.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:15:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v993: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Jan 26 10:15:19 compute-0 ceph-mon[74456]: pgmap v992: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:15:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:15:19 compute-0 nova_compute[254880]: 2026-01-26 10:15:19.888 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:20 compute-0 podman[272170]: 2026-01-26 10:15:20.15613114 +0000 UTC m=+0.083540044 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:15:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:20.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:20 compute-0 ceph-mon[74456]: pgmap v993: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Jan 26 10:15:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:21.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v994: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 26 10:15:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:15:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:15:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:15:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:15:22 compute-0 ceph-mon[74456]: pgmap v994: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 26 10:15:22 compute-0 nova_compute[254880]: 2026-01-26 10:15:22.771 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:22.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:23.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=infra.usagestats t=2026-01-26T10:15:23.42241038Z level=info msg="Usage stats are ready to report"
Jan 26 10:15:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v995: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 26 10:15:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:15:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:15:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:24.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:15:24 compute-0 sudo[272195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:15:24 compute-0 sudo[272195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:24 compute-0 sudo[272195]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:24 compute-0 nova_compute[254880]: 2026-01-26 10:15:24.889 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:25.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:25 compute-0 ceph-mon[74456]: pgmap v995: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 26 10:15:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v996: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 26 10:15:25 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:15:25.968 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:15:25 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:15:25.969 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 10:15:25 compute-0 nova_compute[254880]: 2026-01-26 10:15:25.993 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:15:26] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 26 10:15:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:15:26] "GET /metrics HTTP/1.1" 200 48484 "" "Prometheus/2.51.0"
Jan 26 10:15:26 compute-0 ceph-mon[74456]: pgmap v996: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 26 10:15:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:15:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:26.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:15:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:15:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:15:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:15:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:15:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:27.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:15:27.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:15:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:15:27 compute-0 nova_compute[254880]: 2026-01-26 10:15:27.774 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:27 compute-0 sudo[272222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:15:27 compute-0 sudo[272222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:27 compute-0 sudo[272222]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:27 compute-0 sudo[272247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:15:27 compute-0 sudo[272247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:28 compute-0 sudo[272247]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:28 compute-0 sudo[272302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:15:28 compute-0 sudo[272302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:28 compute-0 sudo[272302]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:28 compute-0 sudo[272328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 26 10:15:28 compute-0 sudo[272328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:28.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:28 compute-0 sudo[272328]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:28 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:15:28 compute-0 ovn_controller[155832]: 2026-01-26T10:15:28Z|00066|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Jan 26 10:15:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:15:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:29.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:29 compute-0 ceph-mon[74456]: pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:15:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v998: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 26 10:15:29 compute-0 nova_compute[254880]: 2026-01-26 10:15:29.891 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:29 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:15:29.971 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:15:30 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:15:30 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 10:15:30 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 10:15:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:30.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:31 compute-0 ceph-mon[74456]: pgmap v998: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 26 10:15:31 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:31 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:31 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 10:15:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:31.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v999: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 26 10:15:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:15:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:15:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:15:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:15:32 compute-0 ceph-mon[74456]: pgmap v999: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 26 10:15:32 compute-0 nova_compute[254880]: 2026-01-26 10:15:32.777 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:15:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:32.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:15:32 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 10:15:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 10:15:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:15:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:33.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:15:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 10:15:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 10:15:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1000: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 26 10:15:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:15:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:15:33 compute-0 nova_compute[254880]: 2026-01-26 10:15:33.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:15:33 compute-0 nova_compute[254880]: 2026-01-26 10:15:33.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:15:33 compute-0 nova_compute[254880]: 2026-01-26 10:15:33.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:15:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.058 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.059 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.059 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.059 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.060 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:15:34 compute-0 podman[272374]: 2026-01-26 10:15:34.145028613 +0000 UTC m=+0.079978349 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:15:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:15:34 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2751436258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.572 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:15:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:34 compute-0 ceph-mon[74456]: pgmap v1000: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 26 10:15:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.740 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.742 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4563MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.742 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.742 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:15:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:34.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.816 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.816 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.831 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:15:34 compute-0 nova_compute[254880]: 2026-01-26 10:15:34.893 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 10:15:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 10:15:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:35.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:15:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/265833480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:15:35 compute-0 nova_compute[254880]: 2026-01-26 10:15:35.397 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:15:35 compute-0 nova_compute[254880]: 2026-01-26 10:15:35.403 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:15:35 compute-0 nova_compute[254880]: 2026-01-26 10:15:35.450 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:15:35 compute-0 nova_compute[254880]: 2026-01-26 10:15:35.474 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:15:35 compute-0 nova_compute[254880]: 2026-01-26 10:15:35.474 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:15:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 10:15:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1001: 353 pgs: 353 active+clean; 113 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Jan 26 10:15:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 10:15:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2751436258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:15:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/265833480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:15:35 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 26 10:15:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 10:15:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 26 10:15:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 10:15:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:15:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:15:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:15:35 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:15:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1002: 353 pgs: 353 active+clean; 113 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.5 MiB/s wr, 62 op/s
Jan 26 10:15:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:15:36 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 26 10:15:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:15:36] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:15:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:15:36] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:15:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:15:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:36.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:15:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:15:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:15:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:15:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:15:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:37.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:15:37.173Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:15:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:15:37 compute-0 ceph-mon[74456]: pgmap v1001: 353 pgs: 353 active+clean; 113 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Jan 26 10:15:37 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:37 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:37 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 10:15:37 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:37 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 10:15:37 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:15:37 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:15:37 compute-0 ceph-mon[74456]: pgmap v1002: 353 pgs: 353 active+clean; 113 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.5 MiB/s wr, 62 op/s
Jan 26 10:15:37 compute-0 ceph-mon[74456]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 26 10:15:37 compute-0 nova_compute[254880]: 2026-01-26 10:15:37.779 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:15:37 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:15:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:15:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:15:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:15:37 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:15:37 compute-0 sudo[272449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:15:37 compute-0 sudo[272449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:37 compute-0 sudo[272449]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1003: 353 pgs: 353 active+clean; 113 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.5 MiB/s wr, 62 op/s
Jan 26 10:15:37 compute-0 sudo[272474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:15:37 compute-0 sudo[272474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:38 compute-0 podman[272541]: 2026-01-26 10:15:38.352826396 +0000 UTC m=+0.050713115 container create 51b948a0391d74479e14e3d586a0a21efd7ac7162b2c9b4de1fef8ffa760c446 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 10:15:38 compute-0 systemd[1]: Started libpod-conmon-51b948a0391d74479e14e3d586a0a21efd7ac7162b2c9b4de1fef8ffa760c446.scope.
Jan 26 10:15:38 compute-0 podman[272541]: 2026-01-26 10:15:38.326309307 +0000 UTC m=+0.024196046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:15:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:15:38 compute-0 podman[272541]: 2026-01-26 10:15:38.46250545 +0000 UTC m=+0.160392189 container init 51b948a0391d74479e14e3d586a0a21efd7ac7162b2c9b4de1fef8ffa760c446 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Jan 26 10:15:38 compute-0 podman[272541]: 2026-01-26 10:15:38.470560112 +0000 UTC m=+0.168446831 container start 51b948a0391d74479e14e3d586a0a21efd7ac7162b2c9b4de1fef8ffa760c446 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 26 10:15:38 compute-0 fervent_goodall[272557]: 167 167
Jan 26 10:15:38 compute-0 systemd[1]: libpod-51b948a0391d74479e14e3d586a0a21efd7ac7162b2c9b4de1fef8ffa760c446.scope: Deactivated successfully.
Jan 26 10:15:38 compute-0 nova_compute[254880]: 2026-01-26 10:15:38.475 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:15:38 compute-0 nova_compute[254880]: 2026-01-26 10:15:38.476 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:15:38 compute-0 nova_compute[254880]: 2026-01-26 10:15:38.476 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:15:38 compute-0 podman[272541]: 2026-01-26 10:15:38.489381469 +0000 UTC m=+0.187268188 container attach 51b948a0391d74479e14e3d586a0a21efd7ac7162b2c9b4de1fef8ffa760c446 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_goodall, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 10:15:38 compute-0 podman[272541]: 2026-01-26 10:15:38.489730749 +0000 UTC m=+0.187617468 container died 51b948a0391d74479e14e3d586a0a21efd7ac7162b2c9b4de1fef8ffa760c446 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:15:38 compute-0 nova_compute[254880]: 2026-01-26 10:15:38.490 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:15:38 compute-0 nova_compute[254880]: 2026-01-26 10:15:38.490 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-84f594a681cc3428169d614b9629a35c3dc0d7dfa7b07dd3c5b79d8a1a4c6af1-merged.mount: Deactivated successfully.
Jan 26 10:15:38 compute-0 podman[272541]: 2026-01-26 10:15:38.725300604 +0000 UTC m=+0.423187313 container remove 51b948a0391d74479e14e3d586a0a21efd7ac7162b2c9b4de1fef8ffa760c446 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:15:38 compute-0 systemd[1]: libpod-conmon-51b948a0391d74479e14e3d586a0a21efd7ac7162b2c9b4de1fef8ffa760c446.scope: Deactivated successfully.
Jan 26 10:15:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:38.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:38 compute-0 podman[272583]: 2026-01-26 10:15:38.893633331 +0000 UTC m=+0.043088206 container create 596cd7fa64043c1819717e3c2382d5bd55d104e860ae10beabd7111815c0a47c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_blackwell, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 10:15:38 compute-0 systemd[1]: Started libpod-conmon-596cd7fa64043c1819717e3c2382d5bd55d104e860ae10beabd7111815c0a47c.scope.
Jan 26 10:15:38 compute-0 nova_compute[254880]: 2026-01-26 10:15:38.968 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:15:38 compute-0 podman[272583]: 2026-01-26 10:15:38.877229279 +0000 UTC m=+0.026684084 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:15:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e76f0e2b316cff0799b3fbf7b60b984d2e4ccc7c81f9d0e0dd5ae24c456d27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:15:38 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:38 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:38 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:15:38 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:15:38 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:15:38 compute-0 ceph-mon[74456]: pgmap v1003: 353 pgs: 353 active+clean; 113 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.5 MiB/s wr, 62 op/s
Jan 26 10:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e76f0e2b316cff0799b3fbf7b60b984d2e4ccc7c81f9d0e0dd5ae24c456d27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e76f0e2b316cff0799b3fbf7b60b984d2e4ccc7c81f9d0e0dd5ae24c456d27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e76f0e2b316cff0799b3fbf7b60b984d2e4ccc7c81f9d0e0dd5ae24c456d27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90e76f0e2b316cff0799b3fbf7b60b984d2e4ccc7c81f9d0e0dd5ae24c456d27/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:15:39 compute-0 podman[272583]: 2026-01-26 10:15:38.999872111 +0000 UTC m=+0.149326886 container init 596cd7fa64043c1819717e3c2382d5bd55d104e860ae10beabd7111815c0a47c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:15:39 compute-0 podman[272583]: 2026-01-26 10:15:39.009542777 +0000 UTC m=+0.158997542 container start 596cd7fa64043c1819717e3c2382d5bd55d104e860ae10beabd7111815c0a47c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_blackwell, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 26 10:15:39 compute-0 podman[272583]: 2026-01-26 10:15:39.013559548 +0000 UTC m=+0.163014323 container attach 596cd7fa64043c1819717e3c2382d5bd55d104e860ae10beabd7111815c0a47c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_blackwell, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:15:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:15:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:15:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:39.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:15:39 compute-0 tender_blackwell[272600]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:15:39 compute-0 tender_blackwell[272600]: --> All data devices are unavailable
Jan 26 10:15:39 compute-0 systemd[1]: libpod-596cd7fa64043c1819717e3c2382d5bd55d104e860ae10beabd7111815c0a47c.scope: Deactivated successfully.
Jan 26 10:15:39 compute-0 podman[272615]: 2026-01-26 10:15:39.425296905 +0000 UTC m=+0.027459196 container died 596cd7fa64043c1819717e3c2382d5bd55d104e860ae10beabd7111815c0a47c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_blackwell, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 26 10:15:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-90e76f0e2b316cff0799b3fbf7b60b984d2e4ccc7c81f9d0e0dd5ae24c456d27-merged.mount: Deactivated successfully.
Jan 26 10:15:39 compute-0 nova_compute[254880]: 2026-01-26 10:15:39.895 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:39 compute-0 podman[272615]: 2026-01-26 10:15:39.901026682 +0000 UTC m=+0.503188863 container remove 596cd7fa64043c1819717e3c2382d5bd55d104e860ae10beabd7111815c0a47c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_blackwell, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:15:39 compute-0 systemd[1]: libpod-conmon-596cd7fa64043c1819717e3c2382d5bd55d104e860ae10beabd7111815c0a47c.scope: Deactivated successfully.
Jan 26 10:15:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1004: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 75 op/s
Jan 26 10:15:39 compute-0 sudo[272474]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:40 compute-0 sudo[272630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:15:40 compute-0 sudo[272630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:40 compute-0 sudo[272630]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:40 compute-0 sudo[272655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:15:40 compute-0 sudo[272655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:40 compute-0 ceph-mon[74456]: pgmap v1004: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 75 op/s
Jan 26 10:15:40 compute-0 podman[272721]: 2026-01-26 10:15:40.537412385 +0000 UTC m=+0.050534880 container create f3448fb8bc4545d5e52115e166d7d8572df583d206befee5961f91e97c020070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:15:40 compute-0 systemd[1]: Started libpod-conmon-f3448fb8bc4545d5e52115e166d7d8572df583d206befee5961f91e97c020070.scope.
Jan 26 10:15:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:15:40 compute-0 podman[272721]: 2026-01-26 10:15:40.516248053 +0000 UTC m=+0.029370558 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:15:40 compute-0 podman[272721]: 2026-01-26 10:15:40.678933975 +0000 UTC m=+0.192056550 container init f3448fb8bc4545d5e52115e166d7d8572df583d206befee5961f91e97c020070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:15:40 compute-0 podman[272721]: 2026-01-26 10:15:40.687152621 +0000 UTC m=+0.200275116 container start f3448fb8bc4545d5e52115e166d7d8572df583d206befee5961f91e97c020070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:15:40 compute-0 hopeful_jackson[272738]: 167 167
Jan 26 10:15:40 compute-0 systemd[1]: libpod-f3448fb8bc4545d5e52115e166d7d8572df583d206befee5961f91e97c020070.scope: Deactivated successfully.
Jan 26 10:15:40 compute-0 podman[272721]: 2026-01-26 10:15:40.705354741 +0000 UTC m=+0.218477226 container attach f3448fb8bc4545d5e52115e166d7d8572df583d206befee5961f91e97c020070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:15:40 compute-0 podman[272721]: 2026-01-26 10:15:40.706118683 +0000 UTC m=+0.219241198 container died f3448fb8bc4545d5e52115e166d7d8572df583d206befee5961f91e97c020070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5770e0ebffc976b1838708fd60266220b73cb5248bcd28b13a10a647362b10c3-merged.mount: Deactivated successfully.
Jan 26 10:15:40 compute-0 podman[272721]: 2026-01-26 10:15:40.75332398 +0000 UTC m=+0.266446465 container remove f3448fb8bc4545d5e52115e166d7d8572df583d206befee5961f91e97c020070 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_jackson, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:15:40 compute-0 systemd[1]: libpod-conmon-f3448fb8bc4545d5e52115e166d7d8572df583d206befee5961f91e97c020070.scope: Deactivated successfully.
Jan 26 10:15:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:40.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:40 compute-0 nova_compute[254880]: 2026-01-26 10:15:40.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:15:40 compute-0 nova_compute[254880]: 2026-01-26 10:15:40.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:15:40 compute-0 nova_compute[254880]: 2026-01-26 10:15:40.960 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:15:41 compute-0 podman[272765]: 2026-01-26 10:15:40.926421478 +0000 UTC m=+0.027948219 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:15:41 compute-0 podman[272765]: 2026-01-26 10:15:41.026410906 +0000 UTC m=+0.127937597 container create 83826b2e77def9e293f337534952f125f9625e2eee80addf2615eb72bf2ef65a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_engelbart, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Jan 26 10:15:41 compute-0 systemd[1]: Started libpod-conmon-83826b2e77def9e293f337534952f125f9625e2eee80addf2615eb72bf2ef65a.scope.
Jan 26 10:15:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:15:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:41.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e0b7e649d715e25e0c920a04003e990092636e6abe8c1e4161be78d59cd921/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e0b7e649d715e25e0c920a04003e990092636e6abe8c1e4161be78d59cd921/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e0b7e649d715e25e0c920a04003e990092636e6abe8c1e4161be78d59cd921/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e0b7e649d715e25e0c920a04003e990092636e6abe8c1e4161be78d59cd921/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:15:41 compute-0 podman[272765]: 2026-01-26 10:15:41.122068476 +0000 UTC m=+0.223595187 container init 83826b2e77def9e293f337534952f125f9625e2eee80addf2615eb72bf2ef65a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 26 10:15:41 compute-0 podman[272765]: 2026-01-26 10:15:41.132145373 +0000 UTC m=+0.233672064 container start 83826b2e77def9e293f337534952f125f9625e2eee80addf2615eb72bf2ef65a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Jan 26 10:15:41 compute-0 podman[272765]: 2026-01-26 10:15:41.135559006 +0000 UTC m=+0.237085787 container attach 83826b2e77def9e293f337534952f125f9625e2eee80addf2615eb72bf2ef65a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:15:41 compute-0 determined_engelbart[272781]: {
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:     "0": [
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:         {
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:             "devices": [
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "/dev/loop3"
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:             ],
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:             "lv_name": "ceph_lv0",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:             "lv_size": "21470642176",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:             "name": "ceph_lv0",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:             "tags": {
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "ceph.cluster_name": "ceph",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "ceph.crush_device_class": "",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "ceph.encrypted": "0",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "ceph.osd_id": "0",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "ceph.type": "block",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "ceph.vdo": "0",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:                 "ceph.with_tpm": "0"
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:             },
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:             "type": "block",
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:             "vg_name": "ceph_vg0"
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:         }
Jan 26 10:15:41 compute-0 determined_engelbart[272781]:     ]
Jan 26 10:15:41 compute-0 determined_engelbart[272781]: }
Jan 26 10:15:41 compute-0 systemd[1]: libpod-83826b2e77def9e293f337534952f125f9625e2eee80addf2615eb72bf2ef65a.scope: Deactivated successfully.
Jan 26 10:15:41 compute-0 podman[272765]: 2026-01-26 10:15:41.509082014 +0000 UTC m=+0.610608725 container died 83826b2e77def9e293f337534952f125f9625e2eee80addf2615eb72bf2ef65a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_engelbart, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 26 10:15:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1005: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 75 op/s
Jan 26 10:15:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-80e0b7e649d715e25e0c920a04003e990092636e6abe8c1e4161be78d59cd921-merged.mount: Deactivated successfully.
Jan 26 10:15:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:15:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:15:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:15:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:15:42 compute-0 podman[272765]: 2026-01-26 10:15:42.064209764 +0000 UTC m=+1.165736455 container remove 83826b2e77def9e293f337534952f125f9625e2eee80addf2615eb72bf2ef65a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:15:42 compute-0 systemd[1]: libpod-conmon-83826b2e77def9e293f337534952f125f9625e2eee80addf2615eb72bf2ef65a.scope: Deactivated successfully.
Jan 26 10:15:42 compute-0 sudo[272655]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:42 compute-0 sudo[272801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:15:42 compute-0 sudo[272801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:42 compute-0 sudo[272801]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:42 compute-0 sudo[272826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:15:42 compute-0 sudo[272826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/552703975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:15:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3497306287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:15:42 compute-0 podman[272893]: 2026-01-26 10:15:42.632775363 +0000 UTC m=+0.023845527 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:15:42 compute-0 podman[272893]: 2026-01-26 10:15:42.766319443 +0000 UTC m=+0.157389587 container create 4613b9434ac576a84274c3b2fd5663e96d6341961ff9f1f55a13d8295a4a00bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:15:42 compute-0 nova_compute[254880]: 2026-01-26 10:15:42.781 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:15:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:42.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:15:42 compute-0 systemd[1]: Started libpod-conmon-4613b9434ac576a84274c3b2fd5663e96d6341961ff9f1f55a13d8295a4a00bf.scope.
Jan 26 10:15:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:15:43 compute-0 podman[272893]: 2026-01-26 10:15:43.009103236 +0000 UTC m=+0.400173400 container init 4613b9434ac576a84274c3b2fd5663e96d6341961ff9f1f55a13d8295a4a00bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Jan 26 10:15:43 compute-0 podman[272893]: 2026-01-26 10:15:43.021542628 +0000 UTC m=+0.412612772 container start 4613b9434ac576a84274c3b2fd5663e96d6341961ff9f1f55a13d8295a4a00bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:15:43 compute-0 podman[272893]: 2026-01-26 10:15:43.025673972 +0000 UTC m=+0.416744256 container attach 4613b9434ac576a84274c3b2fd5663e96d6341961ff9f1f55a13d8295a4a00bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:15:43 compute-0 elastic_antonelli[272911]: 167 167
Jan 26 10:15:43 compute-0 systemd[1]: libpod-4613b9434ac576a84274c3b2fd5663e96d6341961ff9f1f55a13d8295a4a00bf.scope: Deactivated successfully.
Jan 26 10:15:43 compute-0 podman[272893]: 2026-01-26 10:15:43.031138272 +0000 UTC m=+0.422208416 container died 4613b9434ac576a84274c3b2fd5663e96d6341961ff9f1f55a13d8295a4a00bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_antonelli, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8afaf5052bf96c8b36fcfbc40ea7a4087c5257fa03c60548cca9f68e6cb2f4c0-merged.mount: Deactivated successfully.
Jan 26 10:15:43 compute-0 podman[272893]: 2026-01-26 10:15:43.075396689 +0000 UTC m=+0.466466823 container remove 4613b9434ac576a84274c3b2fd5663e96d6341961ff9f1f55a13d8295a4a00bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:15:43 compute-0 systemd[1]: libpod-conmon-4613b9434ac576a84274c3b2fd5663e96d6341961ff9f1f55a13d8295a4a00bf.scope: Deactivated successfully.
Jan 26 10:15:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:15:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:43.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:15:43 compute-0 podman[272936]: 2026-01-26 10:15:43.221238008 +0000 UTC m=+0.025355438 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:15:43 compute-0 podman[272936]: 2026-01-26 10:15:43.559607309 +0000 UTC m=+0.363724719 container create 108a315b71bd7ed6fc363ffc5db047cad773e336f4063918da83ab8351f18d0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:15:43 compute-0 ceph-mon[74456]: pgmap v1005: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 75 op/s
Jan 26 10:15:43 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1678315008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:15:43 compute-0 systemd[1]: Started libpod-conmon-108a315b71bd7ed6fc363ffc5db047cad773e336f4063918da83ab8351f18d0d.scope.
Jan 26 10:15:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e12250e746988eb0c5a8845daf9ff24f4f14d8d04fb385953169ac0434e776/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e12250e746988eb0c5a8845daf9ff24f4f14d8d04fb385953169ac0434e776/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e12250e746988eb0c5a8845daf9ff24f4f14d8d04fb385953169ac0434e776/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e12250e746988eb0c5a8845daf9ff24f4f14d8d04fb385953169ac0434e776/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:15:43 compute-0 podman[272936]: 2026-01-26 10:15:43.634274751 +0000 UTC m=+0.438392191 container init 108a315b71bd7ed6fc363ffc5db047cad773e336f4063918da83ab8351f18d0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_payne, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:15:43 compute-0 podman[272936]: 2026-01-26 10:15:43.64114393 +0000 UTC m=+0.445261340 container start 108a315b71bd7ed6fc363ffc5db047cad773e336f4063918da83ab8351f18d0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 10:15:43 compute-0 podman[272936]: 2026-01-26 10:15:43.644282416 +0000 UTC m=+0.448399826 container attach 108a315b71bd7ed6fc363ffc5db047cad773e336f4063918da83ab8351f18d0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 10:15:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1006: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 75 op/s
Jan 26 10:15:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:15:44 compute-0 lvm[273026]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:15:44 compute-0 lvm[273026]: VG ceph_vg0 finished
Jan 26 10:15:44 compute-0 festive_payne[272952]: {}
Jan 26 10:15:44 compute-0 systemd[1]: libpod-108a315b71bd7ed6fc363ffc5db047cad773e336f4063918da83ab8351f18d0d.scope: Deactivated successfully.
Jan 26 10:15:44 compute-0 systemd[1]: libpod-108a315b71bd7ed6fc363ffc5db047cad773e336f4063918da83ab8351f18d0d.scope: Consumed 1.112s CPU time.
Jan 26 10:15:44 compute-0 podman[272936]: 2026-01-26 10:15:44.335459985 +0000 UTC m=+1.139577385 container died 108a315b71bd7ed6fc363ffc5db047cad773e336f4063918da83ab8351f18d0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 10:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8e12250e746988eb0c5a8845daf9ff24f4f14d8d04fb385953169ac0434e776-merged.mount: Deactivated successfully.
Jan 26 10:15:44 compute-0 podman[272936]: 2026-01-26 10:15:44.712052047 +0000 UTC m=+1.516169457 container remove 108a315b71bd7ed6fc363ffc5db047cad773e336f4063918da83ab8351f18d0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_payne, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 10:15:44 compute-0 sudo[272826]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:15:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1121211806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:15:44 compute-0 ceph-mon[74456]: pgmap v1006: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 75 op/s
Jan 26 10:15:44 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:15:44 compute-0 systemd[1]: libpod-conmon-108a315b71bd7ed6fc363ffc5db047cad773e336f4063918da83ab8351f18d0d.scope: Deactivated successfully.
Jan 26 10:15:44 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:15:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:44.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:15:44 compute-0 sudo[273045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:15:44 compute-0 sudo[273045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:44 compute-0 sudo[273045]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:44 compute-0 nova_compute[254880]: 2026-01-26 10:15:44.896 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:44 compute-0 nova_compute[254880]: 2026-01-26 10:15:44.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:15:44 compute-0 nova_compute[254880]: 2026-01-26 10:15:44.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:15:44 compute-0 sudo[273070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:15:44 compute-0 sudo[273070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:15:44 compute-0 sudo[273070]: pam_unix(sudo:session): session closed for user root
Jan 26 10:15:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:45.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:45 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:45 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 38 KiB/s wr, 47 op/s
Jan 26 10:15:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:15:46] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:15:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:15:46] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Jan 26 10:15:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:46.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:46 compute-0 ceph-mon[74456]: pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 38 KiB/s wr, 47 op/s
Jan 26 10:15:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3004217025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:15:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:15:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:15:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:15:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:15:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:47.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:15:47.173Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:15:47 compute-0 nova_compute[254880]: 2026-01-26 10:15:47.786 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 32 KiB/s wr, 40 op/s
Jan 26 10:15:48 compute-0 ceph-mon[74456]: pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 32 KiB/s wr, 40 op/s
Jan 26 10:15:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 26 10:15:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:15:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:15:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:15:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:15:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:48.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:15:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:15:48 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:15:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:15:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:15:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:49.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:49 compute-0 nova_compute[254880]: 2026-01-26 10:15:49.898 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 32 KiB/s wr, 41 op/s
Jan 26 10:15:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:15:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:15:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:50.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:51 compute-0 ceph-mon[74456]: pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 32 KiB/s wr, 41 op/s
Jan 26 10:15:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:51.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:51 compute-0 podman[273102]: 2026-01-26 10:15:51.122125146 +0000 UTC m=+0.055832746 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 10:15:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 14 KiB/s wr, 29 op/s
Jan 26 10:15:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:15:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:15:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:15:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:15:52 compute-0 ceph-mon[74456]: pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 14 KiB/s wr, 29 op/s
Jan 26 10:15:52 compute-0 nova_compute[254880]: 2026-01-26 10:15:52.789 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:15:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:52.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:15:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:15:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:53.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:15:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 14 KiB/s wr, 29 op/s
Jan 26 10:15:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:15:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:15:54.700 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:15:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:15:54.701 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:15:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:15:54.701 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:15:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:54.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:54 compute-0 nova_compute[254880]: 2026-01-26 10:15:54.901 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:55.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:55 compute-0 ceph-mon[74456]: pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 14 KiB/s wr, 29 op/s
Jan 26 10:15:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 14 KiB/s wr, 30 op/s
Jan 26 10:15:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:15:56] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:15:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:15:56] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:15:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:15:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:56.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:15:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:15:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:15:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:15:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:15:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:15:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:57.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:57 compute-0 ceph-mon[74456]: pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 14 KiB/s wr, 30 op/s
Jan 26 10:15:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:15:57.174Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:15:57 compute-0 sshd-session[273128]: Invalid user zabbix from 157.245.76.178 port 36856
Jan 26 10:15:57 compute-0 sshd-session[273128]: Connection closed by invalid user zabbix 157.245.76.178 port 36856 [preauth]
Jan 26 10:15:57 compute-0 nova_compute[254880]: 2026-01-26 10:15:57.791 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 26 10:15:58 compute-0 ceph-mon[74456]: pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 26 10:15:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1968492371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:15:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1968492371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:15:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:15:58.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:15:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:15:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:15:59.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:15:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:15:59 compute-0 nova_compute[254880]: 2026-01-26 10:15:59.902 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:15:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s
Jan 26 10:16:00 compute-0 ceph-mon[74456]: pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s
Jan 26 10:16:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:00.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:01.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:16:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:16:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:16:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:16:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:16:02 compute-0 nova_compute[254880]: 2026-01-26 10:16:02.794 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:16:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:02.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:16:03 compute-0 ceph-mon[74456]: pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:16:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:16:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:03.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:16:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:16:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:16:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:16:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:16:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:16:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:04.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:04 compute-0 nova_compute[254880]: 2026-01-26 10:16:04.953 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:05 compute-0 ceph-mon[74456]: pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:16:05 compute-0 sudo[273138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:16:05 compute-0 sudo[273138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:16:05 compute-0 sudo[273138]: pam_unix(sudo:session): session closed for user root
Jan 26 10:16:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:05.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:05 compute-0 podman[273144]: 2026-01-26 10:16:05.166095995 +0000 UTC m=+0.086313363 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 10:16:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:16:06 compute-0 ceph-mon[74456]: pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:16:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:16:06] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:16:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:16:06] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:16:06 compute-0 nova_compute[254880]: 2026-01-26 10:16:06.848 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:16:06 compute-0 nova_compute[254880]: 2026-01-26 10:16:06.848 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:16:06 compute-0 nova_compute[254880]: 2026-01-26 10:16:06.872 254884 DEBUG nova.compute.manager [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 10:16:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:06.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:06 compute-0 nova_compute[254880]: 2026-01-26 10:16:06.949 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:16:06 compute-0 nova_compute[254880]: 2026-01-26 10:16:06.949 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:16:06 compute-0 nova_compute[254880]: 2026-01-26 10:16:06.956 254884 DEBUG nova.virt.hardware [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 10:16:06 compute-0 nova_compute[254880]: 2026-01-26 10:16:06.957 254884 INFO nova.compute.claims [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Claim successful on node compute-0.ctlplane.example.com
Jan 26 10:16:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:16:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:16:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:16:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.086 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:16:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:16:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:07.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:16:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:07.176Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:16:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:07.176Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:16:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:07.176Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:16:07 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:16:07 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/603851357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.540 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.546 254884 DEBUG nova.compute.provider_tree [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.575 254884 DEBUG nova.scheduler.client.report [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:16:07 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/603851357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.605 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.605 254884 DEBUG nova.compute.manager [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.659 254884 DEBUG nova.compute.manager [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.659 254884 DEBUG nova.network.neutron [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.680 254884 INFO nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.707 254884 DEBUG nova.compute.manager [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.798 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.822 254884 DEBUG nova.compute.manager [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.823 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.824 254884 INFO nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Creating image(s)
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.850 254884 DEBUG nova.storage.rbd_utils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.877 254884 DEBUG nova.storage.rbd_utils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.903 254884 DEBUG nova.storage.rbd_utils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.907 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:16:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1018: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.969 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.970 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "d81880e926e175d0cc7241caa7cc18231a8a289c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.971 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "d81880e926e175d0cc7241caa7cc18231a8a289c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:16:07 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.971 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "d81880e926e175d0cc7241caa7cc18231a8a289c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:16:08 compute-0 nova_compute[254880]: 2026-01-26 10:16:07.999 254884 DEBUG nova.storage.rbd_utils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:16:08 compute-0 nova_compute[254880]: 2026-01-26 10:16:08.004 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c 95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:16:08 compute-0 nova_compute[254880]: 2026-01-26 10:16:08.232 254884 DEBUG nova.policy [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1208d3e25b940ea93fe76884c7a53db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 10:16:08 compute-0 nova_compute[254880]: 2026-01-26 10:16:08.283 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c 95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:16:08 compute-0 nova_compute[254880]: 2026-01-26 10:16:08.356 254884 DEBUG nova.storage.rbd_utils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] resizing rbd image 95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 10:16:08 compute-0 nova_compute[254880]: 2026-01-26 10:16:08.462 254884 DEBUG nova.objects.instance [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'migration_context' on Instance uuid 95d9d3cd-1887-4125-b0e7-2252b73dbe82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:16:08 compute-0 nova_compute[254880]: 2026-01-26 10:16:08.477 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 10:16:08 compute-0 nova_compute[254880]: 2026-01-26 10:16:08.477 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Ensure instance console log exists: /var/lib/nova/instances/95d9d3cd-1887-4125-b0e7-2252b73dbe82/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 10:16:08 compute-0 nova_compute[254880]: 2026-01-26 10:16:08.478 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:16:08 compute-0 nova_compute[254880]: 2026-01-26 10:16:08.478 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:16:08 compute-0 nova_compute[254880]: 2026-01-26 10:16:08.478 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:16:08 compute-0 ceph-mon[74456]: pgmap v1018: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:16:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:08.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:09.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:09 compute-0 nova_compute[254880]: 2026-01-26 10:16:09.336 254884 DEBUG nova.network.neutron [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Successfully created port: 1cdd4fc2-81a5-488e-820c-586ca6c12d57 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 10:16:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:16:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1019: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:16:09 compute-0 nova_compute[254880]: 2026-01-26 10:16:09.955 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:10 compute-0 nova_compute[254880]: 2026-01-26 10:16:10.399 254884 DEBUG nova.network.neutron [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Successfully updated port: 1cdd4fc2-81a5-488e-820c-586ca6c12d57 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 10:16:10 compute-0 nova_compute[254880]: 2026-01-26 10:16:10.640 254884 DEBUG nova.compute.manager [req-11032ff6-16c0-4adf-8343-01c635ef55cd req-eb991a05-7125-4ba4-80b0-4e39e878354c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-changed-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:16:10 compute-0 nova_compute[254880]: 2026-01-26 10:16:10.641 254884 DEBUG nova.compute.manager [req-11032ff6-16c0-4adf-8343-01c635ef55cd req-eb991a05-7125-4ba4-80b0-4e39e878354c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Refreshing instance network info cache due to event network-changed-1cdd4fc2-81a5-488e-820c-586ca6c12d57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:16:10 compute-0 nova_compute[254880]: 2026-01-26 10:16:10.641 254884 DEBUG oslo_concurrency.lockutils [req-11032ff6-16c0-4adf-8343-01c635ef55cd req-eb991a05-7125-4ba4-80b0-4e39e878354c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:16:10 compute-0 nova_compute[254880]: 2026-01-26 10:16:10.641 254884 DEBUG oslo_concurrency.lockutils [req-11032ff6-16c0-4adf-8343-01c635ef55cd req-eb991a05-7125-4ba4-80b0-4e39e878354c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:16:10 compute-0 nova_compute[254880]: 2026-01-26 10:16:10.641 254884 DEBUG nova.network.neutron [req-11032ff6-16c0-4adf-8343-01c635ef55cd req-eb991a05-7125-4ba4-80b0-4e39e878354c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Refreshing network info cache for port 1cdd4fc2-81a5-488e-820c-586ca6c12d57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:16:10 compute-0 nova_compute[254880]: 2026-01-26 10:16:10.643 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:16:10 compute-0 nova_compute[254880]: 2026-01-26 10:16:10.775 254884 DEBUG nova.network.neutron [req-11032ff6-16c0-4adf-8343-01c635ef55cd req-eb991a05-7125-4ba4-80b0-4e39e878354c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 10:16:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:16:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:10.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:16:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:11.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:11 compute-0 ceph-mon[74456]: pgmap v1019: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:16:11 compute-0 nova_compute[254880]: 2026-01-26 10:16:11.681 254884 DEBUG nova.network.neutron [req-11032ff6-16c0-4adf-8343-01c635ef55cd req-eb991a05-7125-4ba4-80b0-4e39e878354c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:16:11 compute-0 nova_compute[254880]: 2026-01-26 10:16:11.723 254884 DEBUG oslo_concurrency.lockutils [req-11032ff6-16c0-4adf-8343-01c635ef55cd req-eb991a05-7125-4ba4-80b0-4e39e878354c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:16:11 compute-0 nova_compute[254880]: 2026-01-26 10:16:11.724 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquired lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:16:11 compute-0 nova_compute[254880]: 2026-01-26 10:16:11.724 254884 DEBUG nova.network.neutron [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 10:16:11 compute-0 nova_compute[254880]: 2026-01-26 10:16:11.871 254884 DEBUG nova.network.neutron [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 10:16:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1020: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:16:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:16:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:16:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:16:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:16:12 compute-0 nova_compute[254880]: 2026-01-26 10:16:12.750 254884 DEBUG nova.network.neutron [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updating instance_info_cache with network_info: [{"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:16:12 compute-0 nova_compute[254880]: 2026-01-26 10:16:12.800 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:12.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:13.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.185 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Releasing lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.185 254884 DEBUG nova.compute.manager [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Instance network_info: |[{"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 10:16:13 compute-0 ceph-mon[74456]: pgmap v1020: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.188 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Start _get_guest_xml network_info=[{"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T10:05:39Z,direct_url=<?>,disk_format='qcow2',id=6789692f-fc1f-4efa-ae75-dcc13be695ef,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3ff3fa2a5531460b993c609589aa545d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T10:05:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'device_type': 'disk', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'image_id': '6789692f-fc1f-4efa-ae75-dcc13be695ef'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.192 254884 WARNING nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.196 254884 DEBUG nova.virt.libvirt.host [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.196 254884 DEBUG nova.virt.libvirt.host [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.199 254884 DEBUG nova.virt.libvirt.host [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.199 254884 DEBUG nova.virt.libvirt.host [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.200 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.200 254884 DEBUG nova.virt.hardware [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T10:05:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='57e1601b-dbfa-4d3b-8b96-27302e4a7a06',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T10:05:39Z,direct_url=<?>,disk_format='qcow2',id=6789692f-fc1f-4efa-ae75-dcc13be695ef,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3ff3fa2a5531460b993c609589aa545d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T10:05:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.201 254884 DEBUG nova.virt.hardware [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.201 254884 DEBUG nova.virt.hardware [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.201 254884 DEBUG nova.virt.hardware [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.201 254884 DEBUG nova.virt.hardware [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.201 254884 DEBUG nova.virt.hardware [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.202 254884 DEBUG nova.virt.hardware [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.202 254884 DEBUG nova.virt.hardware [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.202 254884 DEBUG nova.virt.hardware [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.202 254884 DEBUG nova.virt.hardware [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.203 254884 DEBUG nova.virt.hardware [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.205 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:16:13 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 10:16:13 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/448725082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.653 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.690 254884 DEBUG nova.storage.rbd_utils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:16:13 compute-0 nova_compute[254880]: 2026-01-26 10:16:13.694 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:16:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1021: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:16:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 10:16:14 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197192949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.131 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.133 254884 DEBUG nova.virt.libvirt.vif [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T10:16:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1423011292',display_name='tempest-TestNetworkBasicOps-server-1423011292',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1423011292',id=11,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKs6r9vrmOLoMp7qP9DSziD19MyulJ4WxkGq32T5oMGA9YlFhrc8KR+CrRlK7gHjttZpWpF8q1BhU3cfWPT3YhBD4pYVF8/xqjrmceUzbapOQ0G+qVqOkZvdNryYHhvSMg==',key_name='tempest-TestNetworkBasicOps-1227421461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-y2l003zy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T10:16:07Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=95d9d3cd-1887-4125-b0e7-2252b73dbe82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.134 254884 DEBUG nova.network.os_vif_util [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.135 254884 DEBUG nova.network.os_vif_util [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:fb:ae,bridge_name='br-int',has_traffic_filtering=True,id=1cdd4fc2-81a5-488e-820c-586ca6c12d57,network=Network(9bff64e0-694f-4b2d-b4b5-5e3b1d94460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cdd4fc2-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.136 254884 DEBUG nova.objects.instance [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 95d9d3cd-1887-4125-b0e7-2252b73dbe82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.155 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] End _get_guest_xml xml=<domain type="kvm">
Jan 26 10:16:14 compute-0 nova_compute[254880]:   <uuid>95d9d3cd-1887-4125-b0e7-2252b73dbe82</uuid>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   <name>instance-0000000b</name>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   <memory>131072</memory>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   <vcpu>1</vcpu>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   <metadata>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <nova:name>tempest-TestNetworkBasicOps-server-1423011292</nova:name>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <nova:creationTime>2026-01-26 10:16:13</nova:creationTime>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <nova:flavor name="m1.nano">
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <nova:memory>128</nova:memory>
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <nova:disk>1</nova:disk>
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <nova:swap>0</nova:swap>
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <nova:vcpus>1</nova:vcpus>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       </nova:flavor>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <nova:owner>
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <nova:user uuid="c1208d3e25b940ea93fe76884c7a53db">tempest-TestNetworkBasicOps-966559857-project-member</nova:user>
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <nova:project uuid="6ed221b375a44fc2bb2a8f232c5446e7">tempest-TestNetworkBasicOps-966559857</nova:project>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       </nova:owner>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <nova:root type="image" uuid="6789692f-fc1f-4efa-ae75-dcc13be695ef"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <nova:ports>
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <nova:port uuid="1cdd4fc2-81a5-488e-820c-586ca6c12d57">
Jan 26 10:16:14 compute-0 nova_compute[254880]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:         </nova:port>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       </nova:ports>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     </nova:instance>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   </metadata>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   <sysinfo type="smbios">
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <system>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <entry name="manufacturer">RDO</entry>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <entry name="product">OpenStack Compute</entry>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <entry name="serial">95d9d3cd-1887-4125-b0e7-2252b73dbe82</entry>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <entry name="uuid">95d9d3cd-1887-4125-b0e7-2252b73dbe82</entry>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <entry name="family">Virtual Machine</entry>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     </system>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   </sysinfo>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   <os>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <boot dev="hd"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <smbios mode="sysinfo"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   </os>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   <features>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <acpi/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <apic/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <vmcoreinfo/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   </features>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   <clock offset="utc">
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <timer name="hpet" present="no"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   </clock>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   <cpu mode="host-model" match="exact">
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   </cpu>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   <devices>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <disk type="network" device="disk">
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <driver type="raw" cache="none"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <source protocol="rbd" name="vms/95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk">
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <host name="192.168.122.100" port="6789"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <host name="192.168.122.102" port="6789"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <host name="192.168.122.101" port="6789"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       </source>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <auth username="openstack">
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <secret type="ceph" uuid="1a70b85d-e3fd-5814-8a6a-37ea00fcae30"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <target dev="vda" bus="virtio"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <disk type="network" device="cdrom">
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <driver type="raw" cache="none"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <source protocol="rbd" name="vms/95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk.config">
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <host name="192.168.122.100" port="6789"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <host name="192.168.122.102" port="6789"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <host name="192.168.122.101" port="6789"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       </source>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <auth username="openstack">
Jan 26 10:16:14 compute-0 nova_compute[254880]:         <secret type="ceph" uuid="1a70b85d-e3fd-5814-8a6a-37ea00fcae30"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <target dev="sda" bus="sata"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <interface type="ethernet">
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <mac address="fa:16:3e:c2:fb:ae"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <model type="virtio"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <mtu size="1442"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <target dev="tap1cdd4fc2-81"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <serial type="pty">
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <log file="/var/lib/nova/instances/95d9d3cd-1887-4125-b0e7-2252b73dbe82/console.log" append="off"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     </serial>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <video>
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <model type="virtio"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     </video>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <input type="tablet" bus="usb"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <rng model="virtio">
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <backend model="random">/dev/urandom</backend>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     </rng>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <controller type="usb" index="0"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     <memballoon model="virtio">
Jan 26 10:16:14 compute-0 nova_compute[254880]:       <stats period="10"/>
Jan 26 10:16:14 compute-0 nova_compute[254880]:     </memballoon>
Jan 26 10:16:14 compute-0 nova_compute[254880]:   </devices>
Jan 26 10:16:14 compute-0 nova_compute[254880]: </domain>
Jan 26 10:16:14 compute-0 nova_compute[254880]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.156 254884 DEBUG nova.compute.manager [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Preparing to wait for external event network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.156 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.156 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.156 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.157 254884 DEBUG nova.virt.libvirt.vif [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T10:16:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1423011292',display_name='tempest-TestNetworkBasicOps-server-1423011292',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1423011292',id=11,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKs6r9vrmOLoMp7qP9DSziD19MyulJ4WxkGq32T5oMGA9YlFhrc8KR+CrRlK7gHjttZpWpF8q1BhU3cfWPT3YhBD4pYVF8/xqjrmceUzbapOQ0G+qVqOkZvdNryYHhvSMg==',key_name='tempest-TestNetworkBasicOps-1227421461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-y2l003zy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T10:16:07Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=95d9d3cd-1887-4125-b0e7-2252b73dbe82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.157 254884 DEBUG nova.network.os_vif_util [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.158 254884 DEBUG nova.network.os_vif_util [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:fb:ae,bridge_name='br-int',has_traffic_filtering=True,id=1cdd4fc2-81a5-488e-820c-586ca6c12d57,network=Network(9bff64e0-694f-4b2d-b4b5-5e3b1d94460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cdd4fc2-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.158 254884 DEBUG os_vif [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:fb:ae,bridge_name='br-int',has_traffic_filtering=True,id=1cdd4fc2-81a5-488e-820c-586ca6c12d57,network=Network(9bff64e0-694f-4b2d-b4b5-5e3b1d94460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cdd4fc2-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.159 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.159 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.160 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.162 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.162 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1cdd4fc2-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.162 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1cdd4fc2-81, col_values=(('external_ids', {'iface-id': '1cdd4fc2-81a5-488e-820c-586ca6c12d57', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:fb:ae', 'vm-uuid': '95d9d3cd-1887-4125-b0e7-2252b73dbe82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.163 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:14 compute-0 NetworkManager[48970]: <info>  [1769422574.1651] manager: (tap1cdd4fc2-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.167 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.170 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.171 254884 INFO os_vif [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:fb:ae,bridge_name='br-int',has_traffic_filtering=True,id=1cdd4fc2-81a5-488e-820c-586ca6c12d57,network=Network(9bff64e0-694f-4b2d-b4b5-5e3b1d94460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cdd4fc2-81')
Jan 26 10:16:14 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/448725082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:16:14 compute-0 ceph-mon[74456]: pgmap v1021: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:16:14 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3197192949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.263 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.263 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.264 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No VIF found with MAC fa:16:3e:c2:fb:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.264 254884 INFO nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Using config drive
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.294 254884 DEBUG nova.storage.rbd_utils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:16:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:16:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:16:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:14.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:16:14 compute-0 nova_compute[254880]: 2026-01-26 10:16:14.958 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:16:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:15.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:16:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1022: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.026 254884 INFO nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Creating config drive at /var/lib/nova/instances/95d9d3cd-1887-4125-b0e7-2252b73dbe82/disk.config
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.031 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95d9d3cd-1887-4125-b0e7-2252b73dbe82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7mre8m_7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.158 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95d9d3cd-1887-4125-b0e7-2252b73dbe82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7mre8m_7" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.198 254884 DEBUG nova.storage.rbd_utils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.203 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95d9d3cd-1887-4125-b0e7-2252b73dbe82/disk.config 95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.374 254884 DEBUG oslo_concurrency.processutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95d9d3cd-1887-4125-b0e7-2252b73dbe82/disk.config 95d9d3cd-1887-4125-b0e7-2252b73dbe82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.375 254884 INFO nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Deleting local config drive /var/lib/nova/instances/95d9d3cd-1887-4125-b0e7-2252b73dbe82/disk.config because it was imported into RBD.
Jan 26 10:16:16 compute-0 kernel: tap1cdd4fc2-81: entered promiscuous mode
Jan 26 10:16:16 compute-0 NetworkManager[48970]: <info>  [1769422576.4444] manager: (tap1cdd4fc2-81): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.448 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:16 compute-0 ovn_controller[155832]: 2026-01-26T10:16:16Z|00067|binding|INFO|Claiming lport 1cdd4fc2-81a5-488e-820c-586ca6c12d57 for this chassis.
Jan 26 10:16:16 compute-0 ovn_controller[155832]: 2026-01-26T10:16:16Z|00068|binding|INFO|1cdd4fc2-81a5-488e-820c-586ca6c12d57: Claiming fa:16:3e:c2:fb:ae 10.100.0.11
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.451 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.454 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.467 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:fb:ae 10.100.0.11'], port_security=['fa:16:3e:c2:fb:ae 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '95d9d3cd-1887-4125-b0e7-2252b73dbe82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e066970b-3668-485e-a8ee-d7788d42c06f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db1eda71-392f-4d4b-8724-78530674037e, chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], logical_port=1cdd4fc2-81a5-488e-820c-586ca6c12d57) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.468 166625 INFO neutron.agent.ovn.metadata.agent [-] Port 1cdd4fc2-81a5-488e-820c-586ca6c12d57 in datapath 9bff64e0-694f-4b2d-b4b5-5e3b1d94460e bound to our chassis
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.470 166625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bff64e0-694f-4b2d-b4b5-5e3b1d94460e
Jan 26 10:16:16 compute-0 systemd-machined[221254]: New machine qemu-4-instance-0000000b.
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.481 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[8360437e-90c2-46ab-8415-80c281a11176]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.482 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bff64e0-61 in ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.483 261020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bff64e0-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.483 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[76130a47-bad8-4407-9ad6-f626998003d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.484 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3f7634-b506-434b-91d1-c5eed70b75fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.497 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[0caa7458-06bc-4e1c-b750-a461df54e964]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-0000000b.
Jan 26 10:16:16 compute-0 systemd-udevd[273525]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.523 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.522 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[752ccdc3-ae7f-41c7-8bd3-d4a8fce25f3f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 ovn_controller[155832]: 2026-01-26T10:16:16Z|00069|binding|INFO|Setting lport 1cdd4fc2-81a5-488e-820c-586ca6c12d57 ovn-installed in OVS
Jan 26 10:16:16 compute-0 ovn_controller[155832]: 2026-01-26T10:16:16Z|00070|binding|INFO|Setting lport 1cdd4fc2-81a5-488e-820c-586ca6c12d57 up in Southbound
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.528 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:16 compute-0 NetworkManager[48970]: <info>  [1769422576.5335] device (tap1cdd4fc2-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 10:16:16 compute-0 NetworkManager[48970]: <info>  [1769422576.5349] device (tap1cdd4fc2-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.553 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[66c14adb-7702-46e3-a23c-b22f95c95949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.558 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[af5232d5-9358-4c98-923d-11a30e693391]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 NetworkManager[48970]: <info>  [1769422576.5597] manager: (tap9bff64e0-60): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.589 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[1196633a-a77b-40a9-ab0f-11e8b442631e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.592 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[02e505eb-df47-4332-a68a-9f708fc90db8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 NetworkManager[48970]: <info>  [1769422576.6111] device (tap9bff64e0-60): carrier: link connected
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.616 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a8f846-dd5e-49fd-9753-71abfc3fb893]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.630 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[de71ccb3-805b-431f-b942-246add13bef5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bff64e0-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:7f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453026, 'reachable_time': 33322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273555, 'error': None, 'target': 'ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:16:16] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:16:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:16:16] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.642 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[bb023b0a-ee78-4f3d-8c5a-27109a863893]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:7f67'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453026, 'tstamp': 453026}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273556, 'error': None, 'target': 'ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.658 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[58cca39d-6d08-458d-aac6-0b38eff8c281]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bff64e0-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:7f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453026, 'reachable_time': 33322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273557, 'error': None, 'target': 'ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.682 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[0eae3a2c-4da7-4f0a-a6fb-ee12c5258267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.728 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[cde02c86-6f8d-4c9b-9b0f-3b740da94497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.729 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bff64e0-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.729 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.730 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bff64e0-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.732 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:16 compute-0 NetworkManager[48970]: <info>  [1769422576.7327] manager: (tap9bff64e0-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Jan 26 10:16:16 compute-0 kernel: tap9bff64e0-60: entered promiscuous mode
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.738 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.738 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bff64e0-60, col_values=(('external_ids', {'iface-id': '58fa2dc8-9a67-4ebd-8c74-a3dee5be3d64'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:16:16 compute-0 ovn_controller[155832]: 2026-01-26T10:16:16Z|00071|binding|INFO|Releasing lport 58fa2dc8-9a67-4ebd-8c74-a3dee5be3d64 from this chassis (sb_readonly=0)
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.770 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:16 compute-0 nova_compute[254880]: 2026-01-26 10:16:16.773 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.774 166625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bff64e0-694f-4b2d-b4b5-5e3b1d94460e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bff64e0-694f-4b2d-b4b5-5e3b1d94460e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.775 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[b5820347-b622-4cfa-80d4-71b0a39a27c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.775 166625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: global
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     log         /dev/log local0 debug
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     log-tag     haproxy-metadata-proxy-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     user        root
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     group       root
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     maxconn     1024
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     pidfile     /var/lib/neutron/external/pids/9bff64e0-694f-4b2d-b4b5-5e3b1d94460e.pid.haproxy
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     daemon
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: defaults
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     log global
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     mode http
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     option httplog
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     option dontlognull
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     option http-server-close
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     option forwardfor
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     retries                 3
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     timeout http-request    30s
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     timeout connect         30s
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     timeout client          32s
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     timeout server          32s
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     timeout http-keep-alive 30s
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: listen listener
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     bind 169.254.169.254:80
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:     http-request add-header X-OVN-Network-ID 9bff64e0-694f-4b2d-b4b5-5e3b1d94460e
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 10:16:16 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:16.776 166625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e', 'env', 'PROCESS_TAG=haproxy-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bff64e0-694f-4b2d-b4b5-5e3b1d94460e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 10:16:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:16.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:16 compute-0 ceph-mon[74456]: pgmap v1022: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:16:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:16:17 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 10:16:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:16:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:16:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.023 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422577.022771, 95d9d3cd-1887-4125-b0e7-2252b73dbe82 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.024 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] VM Started (Lifecycle Event)
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.055 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.060 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422577.022968, 95d9d3cd-1887-4125-b0e7-2252b73dbe82 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.060 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] VM Paused (Lifecycle Event)
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.081 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.085 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.115 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 10:16:17 compute-0 podman[273633]: 2026-01-26 10:16:17.136418662 +0000 UTC m=+0.046436518 container create ac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 10:16:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:17.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:17 compute-0 systemd[1]: Started libpod-conmon-ac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094.scope.
Jan 26 10:16:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:17.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:16:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c388cb48ee58c27ce7c8a92350d0710e5829f3a72e1cf571ec646bef6410ec1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:17 compute-0 podman[273633]: 2026-01-26 10:16:17.110785687 +0000 UTC m=+0.020803583 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 26 10:16:17 compute-0 podman[273633]: 2026-01-26 10:16:17.215988219 +0000 UTC m=+0.126006105 container init ac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 10:16:17 compute-0 podman[273633]: 2026-01-26 10:16:17.220504044 +0000 UTC m=+0.130521930 container start ac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 10:16:17 compute-0 neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e[273648]: [NOTICE]   (273652) : New worker (273654) forked
Jan 26 10:16:17 compute-0 neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e[273648]: [NOTICE]   (273652) : Loading success.
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.426 254884 DEBUG nova.compute.manager [req-ed7b8fc9-25e0-4f42-8ddb-f964f0a92bad req-532f0fa3-58db-42a3-82b7-fa834abc5520 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.427 254884 DEBUG oslo_concurrency.lockutils [req-ed7b8fc9-25e0-4f42-8ddb-f964f0a92bad req-532f0fa3-58db-42a3-82b7-fa834abc5520 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.427 254884 DEBUG oslo_concurrency.lockutils [req-ed7b8fc9-25e0-4f42-8ddb-f964f0a92bad req-532f0fa3-58db-42a3-82b7-fa834abc5520 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.427 254884 DEBUG oslo_concurrency.lockutils [req-ed7b8fc9-25e0-4f42-8ddb-f964f0a92bad req-532f0fa3-58db-42a3-82b7-fa834abc5520 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.428 254884 DEBUG nova.compute.manager [req-ed7b8fc9-25e0-4f42-8ddb-f964f0a92bad req-532f0fa3-58db-42a3-82b7-fa834abc5520 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Processing event network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.428 254884 DEBUG nova.compute.manager [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.433 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422577.4330528, 95d9d3cd-1887-4125-b0e7-2252b73dbe82 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.433 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] VM Resumed (Lifecycle Event)
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.435 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.438 254884 INFO nova.virt.libvirt.driver [-] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Instance spawned successfully.
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.438 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.466 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.469 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.477 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.477 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.477 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.478 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.478 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.478 254884 DEBUG nova.virt.libvirt.driver [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.501 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.540 254884 INFO nova.compute.manager [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Took 9.72 seconds to spawn the instance on the hypervisor.
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.540 254884 DEBUG nova.compute.manager [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.608 254884 INFO nova.compute.manager [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Took 10.69 seconds to build instance.
Jan 26 10:16:17 compute-0 nova_compute[254880]: 2026-01-26 10:16:17.630 254884 DEBUG oslo_concurrency.lockutils [None req-66bb12ca-4be1-4c05-8328-bb324bf5b535 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:16:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1023: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:16:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:16:18
Jan 26 10:16:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:16:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:16:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'volumes', '.nfs', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.meta']
Jan 26 10:16:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:16:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:16:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:16:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:16:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:16:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:16:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:16:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:16:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:16:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:18.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:19 compute-0 ceph-mon[74456]: pgmap v1023: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:16:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:16:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:19.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:19 compute-0 nova_compute[254880]: 2026-01-26 10:16:19.164 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:16:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:16:19 compute-0 nova_compute[254880]: 2026-01-26 10:16:19.682 254884 DEBUG nova.compute.manager [req-ff5355bd-a3a2-48fb-9c4d-e4f732f93cc9 req-d8bc75e5-9e61-4591-ae7c-31c082d608ee b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:16:19 compute-0 nova_compute[254880]: 2026-01-26 10:16:19.683 254884 DEBUG oslo_concurrency.lockutils [req-ff5355bd-a3a2-48fb-9c4d-e4f732f93cc9 req-d8bc75e5-9e61-4591-ae7c-31c082d608ee b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:16:19 compute-0 nova_compute[254880]: 2026-01-26 10:16:19.683 254884 DEBUG oslo_concurrency.lockutils [req-ff5355bd-a3a2-48fb-9c4d-e4f732f93cc9 req-d8bc75e5-9e61-4591-ae7c-31c082d608ee b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:16:19 compute-0 nova_compute[254880]: 2026-01-26 10:16:19.683 254884 DEBUG oslo_concurrency.lockutils [req-ff5355bd-a3a2-48fb-9c4d-e4f732f93cc9 req-d8bc75e5-9e61-4591-ae7c-31c082d608ee b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:16:19 compute-0 nova_compute[254880]: 2026-01-26 10:16:19.683 254884 DEBUG nova.compute.manager [req-ff5355bd-a3a2-48fb-9c4d-e4f732f93cc9 req-d8bc75e5-9e61-4591-ae7c-31c082d608ee b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] No waiting events found dispatching network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:16:19 compute-0 nova_compute[254880]: 2026-01-26 10:16:19.684 254884 WARNING nova.compute.manager [req-ff5355bd-a3a2-48fb-9c4d-e4f732f93cc9 req-d8bc75e5-9e61-4591-ae7c-31c082d608ee b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received unexpected event network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 for instance with vm_state active and task_state None.
Jan 26 10:16:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1024: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 26 10:16:19 compute-0 nova_compute[254880]: 2026-01-26 10:16:19.960 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:20 compute-0 ceph-mon[74456]: pgmap v1024: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 26 10:16:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:20.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:21.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:21 compute-0 nova_compute[254880]: 2026-01-26 10:16:21.450 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:21 compute-0 NetworkManager[48970]: <info>  [1769422581.4537] manager: (patch-provnet-94d9950f-5cf2-4813-9455-dd14377245f4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 26 10:16:21 compute-0 NetworkManager[48970]: <info>  [1769422581.4546] manager: (patch-br-int-to-provnet-94d9950f-5cf2-4813-9455-dd14377245f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Jan 26 10:16:21 compute-0 ovn_controller[155832]: 2026-01-26T10:16:21Z|00072|binding|INFO|Releasing lport 58fa2dc8-9a67-4ebd-8c74-a3dee5be3d64 from this chassis (sb_readonly=0)
Jan 26 10:16:21 compute-0 nova_compute[254880]: 2026-01-26 10:16:21.487 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:21 compute-0 ovn_controller[155832]: 2026-01-26T10:16:21Z|00073|binding|INFO|Releasing lport 58fa2dc8-9a67-4ebd-8c74-a3dee5be3d64 from this chassis (sb_readonly=0)
Jan 26 10:16:21 compute-0 nova_compute[254880]: 2026-01-26 10:16:21.492 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1025: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:16:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:16:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:16:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:16:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:16:22 compute-0 nova_compute[254880]: 2026-01-26 10:16:22.056 254884 DEBUG nova.compute.manager [req-3da45b68-3402-427c-ba58-6207d0360b45 req-0a2d184c-554e-4515-b6da-77165c8f34a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-changed-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:16:22 compute-0 nova_compute[254880]: 2026-01-26 10:16:22.057 254884 DEBUG nova.compute.manager [req-3da45b68-3402-427c-ba58-6207d0360b45 req-0a2d184c-554e-4515-b6da-77165c8f34a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Refreshing instance network info cache due to event network-changed-1cdd4fc2-81a5-488e-820c-586ca6c12d57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:16:22 compute-0 nova_compute[254880]: 2026-01-26 10:16:22.057 254884 DEBUG oslo_concurrency.lockutils [req-3da45b68-3402-427c-ba58-6207d0360b45 req-0a2d184c-554e-4515-b6da-77165c8f34a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:16:22 compute-0 nova_compute[254880]: 2026-01-26 10:16:22.057 254884 DEBUG oslo_concurrency.lockutils [req-3da45b68-3402-427c-ba58-6207d0360b45 req-0a2d184c-554e-4515-b6da-77165c8f34a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:16:22 compute-0 nova_compute[254880]: 2026-01-26 10:16:22.057 254884 DEBUG nova.network.neutron [req-3da45b68-3402-427c-ba58-6207d0360b45 req-0a2d184c-554e-4515-b6da-77165c8f34a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Refreshing network info cache for port 1cdd4fc2-81a5-488e-820c-586ca6c12d57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:16:22 compute-0 podman[273668]: 2026-01-26 10:16:22.129684827 +0000 UTC m=+0.058395917 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 10:16:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:16:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:22.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:16:23 compute-0 ceph-mon[74456]: pgmap v1025: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:16:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:23.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:23 compute-0 nova_compute[254880]: 2026-01-26 10:16:23.617 254884 DEBUG nova.network.neutron [req-3da45b68-3402-427c-ba58-6207d0360b45 req-0a2d184c-554e-4515-b6da-77165c8f34a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updated VIF entry in instance network info cache for port 1cdd4fc2-81a5-488e-820c-586ca6c12d57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:16:23 compute-0 nova_compute[254880]: 2026-01-26 10:16:23.618 254884 DEBUG nova.network.neutron [req-3da45b68-3402-427c-ba58-6207d0360b45 req-0a2d184c-554e-4515-b6da-77165c8f34a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updating instance_info_cache with network_info: [{"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:16:23 compute-0 nova_compute[254880]: 2026-01-26 10:16:23.640 254884 DEBUG oslo_concurrency.lockutils [req-3da45b68-3402-427c-ba58-6207d0360b45 req-0a2d184c-554e-4515-b6da-77165c8f34a7 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
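[annotation] The instance_info_cache payload logged just above is plain JSON. A sketch of pulling the fixed and floating addresses back out of an abridged copy of that payload (only the fields used below are kept; the full record is in the log line):

    import json

    cache_json = '''
    [{"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57",
      "address": "fa:16:3e:c2:fb:ae",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.11",
                 "floating_ips": [{"address": "192.168.122.186"}]}]}]}}]
    '''

    for vif in json.loads(cache_json):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                # Prints the port id, the fixed IP, and its floating IPs.
                print(vif["id"], ip["address"], floating)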
Jan 26 10:16:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1026: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:16:24 compute-0 nova_compute[254880]: 2026-01-26 10:16:24.165 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:16:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:24.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:24 compute-0 nova_compute[254880]: 2026-01-26 10:16:24.961 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:25 compute-0 ceph-mon[74456]: pgmap v1026: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:16:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:25.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:25 compute-0 sudo[273691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:16:25 compute-0 sudo[273691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:16:25 compute-0 sudo[273691]: pam_unix(sudo:session): session closed for user root
Jan 26 10:16:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1027: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 26 10:16:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:16:26] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
Jan 26 10:16:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:16:26] "GET /metrics HTTP/1.1" 200 48552 "" "Prometheus/2.51.0"
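[annotation] The mgr's prometheus module logs each scrape twice above (once from the container unit, once via cherrypy). Fetching the same endpoint is an ordinary HTTP GET; a sketch, assuming the module's default port 9283, which the log does not record:

    import urllib.request

    url = 'http://192.168.122.100:9283/metrics'  # port assumed, see lead-in
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode()
    # The log shows roughly 48 kB of exposition-format text per scrape.
    print(len(body), body.splitlines()[0])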
Jan 26 10:16:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:26.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:16:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:16:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:16:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:16:27 compute-0 ceph-mon[74456]: pgmap v1027: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 26 10:16:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:16:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:27.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:16:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:27.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
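[annotation] The alertmanager error above means both configured webhook receivers timed out; only compute-0's own dashboard is answering (see the POST returning 200 at 10:16:38 below). A stand-in receiver that would satisfy these POSTs, bound to the same path and port as the failing URLs; this is a generic sketch, not the Ceph dashboard's actual implementation:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != '/api/prometheus_receiver':
                self.send_response(404)
                self.end_headers()
                return
            length = int(self.headers.get('Content-Length', 0))
            payload = json.loads(self.rfile.read(length) or b'{}')
            # Alertmanager batches notifications under the "alerts" key.
            print('received', len(payload.get('alerts', [])), 'alerts')
            self.send_response(200)
            self.end_headers()

    HTTPServer(('0.0.0.0', 8443), Receiver).serve_forever()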
Jan 26 10:16:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1028: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:16:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:16:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:28.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:16:29 compute-0 ceph-mon[74456]: pgmap v1028: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:16:29 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2689113532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:16:29 compute-0 nova_compute[254880]: 2026-01-26 10:16:29.166 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:29.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:16:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1029: 353 pgs: 353 active+clean; 155 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 146 op/s
Jan 26 10:16:29 compute-0 nova_compute[254880]: 2026-01-26 10:16:29.963 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:30 compute-0 ceph-mon[74456]: pgmap v1029: 353 pgs: 353 active+clean; 155 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 146 op/s
Jan 26 10:16:30 compute-0 ovn_controller[155832]: 2026-01-26T10:16:30Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c2:fb:ae 10.100.0.11
Jan 26 10:16:30 compute-0 ovn_controller[155832]: 2026-01-26T10:16:30Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c2:fb:ae 10.100.0.11
Jan 26 10:16:30 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:30.482 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:16:30 compute-0 nova_compute[254880]: 2026-01-26 10:16:30.483 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:30 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:30.484 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
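[annotation] The matched SbGlobalUpdateEvent above is ovsdbapp's row-event mechanism: the IDL compares the old and new rows (nb_cfg 12 to 13) and dispatches to a registered handler, which here chooses to delay its chassis-table write. A sketch of the event-class shape, using ovsdbapp's RowEvent base; the subclass body is illustrative, not neutron's actual handler:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # Match only updates to the SB_Global table, any row.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
            self.event_name = 'SbGlobalUpdateEvent'

        def run(self, event, row, old):
            # "old" carries just the changed columns, as in the logged match.
            print('nb_cfg moved from', getattr(old, 'nb_cfg', None),
                  'to', row.nb_cfg)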
Jan 26 10:16:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:30.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:16:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:31.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:16:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1030: 353 pgs: 353 active+clean; 155 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 3.8 MiB/s wr, 71 op/s
Jan 26 10:16:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:16:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:16:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:16:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:16:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:16:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:32.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:16:33 compute-0 ceph-mon[74456]: pgmap v1030: 353 pgs: 353 active+clean; 155 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 3.8 MiB/s wr, 71 op/s
Jan 26 10:16:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:33.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:16:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:16:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1031: 353 pgs: 353 active+clean; 155 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 3.8 MiB/s wr, 71 op/s
Jan 26 10:16:34 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1830823020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:16:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:16:34 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3710592775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:16:34 compute-0 ceph-mon[74456]: pgmap v1031: 353 pgs: 353 active+clean; 155 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 3.8 MiB/s wr, 71 op/s
Jan 26 10:16:34 compute-0 nova_compute[254880]: 2026-01-26 10:16:34.168 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:16:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:16:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:34.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:16:34 compute-0 nova_compute[254880]: 2026-01-26 10:16:34.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:16:34 compute-0 nova_compute[254880]: 2026-01-26 10:16:34.964 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.014 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.014 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.014 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.015 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.015 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:16:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:35.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:16:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220884915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.463 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:16:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3220884915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
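[annotation] nova's resource audit shells out to the exact command logged above and parses its JSON; the mon's audit channel records the corresponding "df" dispatch. A sketch of the same call; the top-level field names assume the current "ceph df --format=json" schema:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)
    stats = df['stats']  # schema assumption, see lead-in
    free_gb = stats['total_avail_bytes'] / 1024 ** 3
    print('free capacity: %.2f GiB' % free_gb)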
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.586 254884 DEBUG nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.586 254884 DEBUG nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 10:16:35 compute-0 podman[273748]: 2026-01-26 10:16:35.612410579 +0000 UTC m=+0.105164053 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.738 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.740 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4336MB free_disk=59.92290115356445GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.740 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.741 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.931 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Instance 95d9d3cd-1887-4125-b0e7-2252b73dbe82 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.931 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:16:35 compute-0 nova_compute[254880]: 2026-01-26 10:16:35.932 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:16:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1032: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 26 10:16:36 compute-0 nova_compute[254880]: 2026-01-26 10:16:36.029 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:16:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:16:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/613783079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:16:36 compute-0 nova_compute[254880]: 2026-01-26 10:16:36.473 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:16:36 compute-0 nova_compute[254880]: 2026-01-26 10:16:36.479 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:16:36 compute-0 nova_compute[254880]: 2026-01-26 10:16:36.496 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:16:36 compute-0 nova_compute[254880]: 2026-01-26 10:16:36.517 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:16:36 compute-0 nova_compute[254880]: 2026-01-26 10:16:36.517 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
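[annotation] The inventory line above is enough to reproduce placement's capacity math: schedulable capacity per resource class is (total - reserved) * allocation_ratio. Worked through with the logged values:

    inventory = {  # copied from the "Inventory has not changed" line above
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2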
Jan 26 10:16:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:16:36] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Jan 26 10:16:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:16:36] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Jan 26 10:16:36 compute-0 ceph-mon[74456]: pgmap v1032: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 26 10:16:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/613783079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:16:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:16:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:36.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:16:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:16:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:16:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:16:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:16:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:37.178Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:16:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:37.178Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:16:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:37.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:37 compute-0 nova_compute[254880]: 2026-01-26 10:16:37.518 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:16:37 compute-0 nova_compute[254880]: 2026-01-26 10:16:37.520 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:16:37 compute-0 nova_compute[254880]: 2026-01-26 10:16:37.520 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:16:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1154178246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:16:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1033: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 26 10:16:38 compute-0 nova_compute[254880]: 2026-01-26 10:16:38.243 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:16:38 compute-0 nova_compute[254880]: 2026-01-26 10:16:38.244 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquired lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:16:38 compute-0 nova_compute[254880]: 2026-01-26 10:16:38.244 254884 DEBUG nova.network.neutron [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 10:16:38 compute-0 nova_compute[254880]: 2026-01-26 10:16:38.244 254884 DEBUG nova.objects.instance [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 95d9d3cd-1887-4125-b0e7-2252b73dbe82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:16:38 compute-0 ceph-mon[74456]: pgmap v1033: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 26 10:16:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1410766536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:16:38 compute-0 ceph-mgr[74755]: [dashboard INFO request] [192.168.122.100:52696] [POST] [200] [0.002s] [4.0B] [7b0692bd-326c-4c4c-91b2-fb3c87e62777] /api/prometheus_receiver
Jan 26 10:16:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:16:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:38.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:16:39 compute-0 nova_compute[254880]: 2026-01-26 10:16:39.170 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:39.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:39 compute-0 sshd-session[273801]: Invalid user zabbix from 157.245.76.178 port 36134
Jan 26 10:16:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:16:39 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:39.486 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:16:39 compute-0 sshd-session[273801]: Connection closed by invalid user zabbix 157.245.76.178 port 36134 [preauth]
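[annotation] The two sshd-session lines are a password-guessing attempt against a nonexistent zabbix account, dropped before authentication. A sketch of the kind of pattern a fail2ban-style log watcher would apply to such lines; the regex is illustrative, not taken from any particular tool:

    import re

    line = ('Jan 26 10:16:39 compute-0 sshd-session[273801]: '
            'Invalid user zabbix from 157.245.76.178 port 36134')
    m = re.search(r'Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+) port (\d+)',
                  line)
    if m:
        user, src_ip, src_port = m.groups()
        print(user, src_ip, src_port)  # zabbix 157.245.76.178 36134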
Jan 26 10:16:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1034: 353 pgs: 353 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Jan 26 10:16:39 compute-0 nova_compute[254880]: 2026-01-26 10:16:39.967 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:16:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:40.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:16:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:16:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:41.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:16:41 compute-0 nova_compute[254880]: 2026-01-26 10:16:41.268 254884 DEBUG nova.network.neutron [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updating instance_info_cache with network_info: [{"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:16:41 compute-0 nova_compute[254880]: 2026-01-26 10:16:41.442 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Releasing lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:16:41 compute-0 nova_compute[254880]: 2026-01-26 10:16:41.443 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 10:16:41 compute-0 nova_compute[254880]: 2026-01-26 10:16:41.443 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:16:41 compute-0 nova_compute[254880]: 2026-01-26 10:16:41.443 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:16:41 compute-0 nova_compute[254880]: 2026-01-26 10:16:41.443 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:16:41 compute-0 nova_compute[254880]: 2026-01-26 10:16:41.444 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:16:41 compute-0 nova_compute[254880]: 2026-01-26 10:16:41.444 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:16:41 compute-0 nova_compute[254880]: 2026-01-26 10:16:41.444 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 10:16:41 compute-0 ceph-mon[74456]: pgmap v1034: 353 pgs: 353 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Jan 26 10:16:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1035: 353 pgs: 353 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 120 KiB/s wr, 95 op/s
Jan 26 10:16:41 compute-0 nova_compute[254880]: 2026-01-26 10:16:41.977 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:16:41 compute-0 nova_compute[254880]: 2026-01-26 10:16:41.978 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:16:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:16:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:16:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:16:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:16:42 compute-0 ceph-mon[74456]: pgmap v1035: 353 pgs: 353 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 120 KiB/s wr, 95 op/s
Jan 26 10:16:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3699931200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:16:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:42.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:42 compute-0 nova_compute[254880]: 2026-01-26 10:16:42.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:16:42 compute-0 nova_compute[254880]: 2026-01-26 10:16:42.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 10:16:43 compute-0 nova_compute[254880]: 2026-01-26 10:16:43.028 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
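[annotation] The _run_pending_deletes and _cleanup_incomplete_migrations entries above all come from one oslo.service periodic-task loop. A sketch of how such tasks are declared with oslo.service's decorator; the manager class and task body are illustrative, not nova's ComputeManager:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _run_pending_deletes(self, context):
            # The real task logs "Cleaning up deleted instances" and then the
            # count it found ("There are 0 instances to clean" above).
            pass

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)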
Jan 26 10:16:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:43.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1036: 353 pgs: 353 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 120 KiB/s wr, 95 op/s
Jan 26 10:16:44 compute-0 nova_compute[254880]: 2026-01-26 10:16:44.173 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:16:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:16:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:44.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:16:45 compute-0 nova_compute[254880]: 2026-01-26 10:16:45.014 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:45 compute-0 ceph-mon[74456]: pgmap v1036: 353 pgs: 353 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 120 KiB/s wr, 95 op/s
Jan 26 10:16:45 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3339811741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:16:45 compute-0 sudo[273809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:16:45 compute-0 sudo[273809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:16:45 compute-0 sudo[273809]: pam_unix(sudo:session): session closed for user root
Jan 26 10:16:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:45.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:45 compute-0 sudo[273834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:16:45 compute-0 sudo[273834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:16:45 compute-0 sudo[273857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:16:45 compute-0 sudo[273857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:16:45 compute-0 sudo[273857]: pam_unix(sudo:session): session closed for user root
Jan 26 10:16:45 compute-0 sudo[273834]: pam_unix(sudo:session): session closed for user root
Jan 26 10:16:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:16:45 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:16:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:16:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:16:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1037: 353 pgs: 353 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 120 KiB/s wr, 96 op/s
Jan 26 10:16:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:16:45 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:16:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:16:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:16:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:16:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:16:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:16:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:16:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:16:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:16:46 compute-0 sudo[273917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:16:46 compute-0 sudo[273917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:16:46 compute-0 sudo[273917]: pam_unix(sudo:session): session closed for user root
Jan 26 10:16:46 compute-0 sudo[273942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:16:46 compute-0 sudo[273942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:16:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:16:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:16:46 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:16:46 compute-0 podman[274009]: 2026-01-26 10:16:46.599620419 +0000 UTC m=+0.042467589 container create 59b1256adc682954f32ce11213517e088463aa1aa3c72da9532126306dcdf66b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_hamilton, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 26 10:16:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:16:46] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Jan 26 10:16:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:16:46] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Jan 26 10:16:46 compute-0 systemd[1]: Started libpod-conmon-59b1256adc682954f32ce11213517e088463aa1aa3c72da9532126306dcdf66b.scope.
Jan 26 10:16:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:16:46 compute-0 podman[274009]: 2026-01-26 10:16:46.58318391 +0000 UTC m=+0.026031100 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:16:46 compute-0 podman[274009]: 2026-01-26 10:16:46.691624167 +0000 UTC m=+0.134471357 container init 59b1256adc682954f32ce11213517e088463aa1aa3c72da9532126306dcdf66b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_hamilton, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 10:16:46 compute-0 podman[274009]: 2026-01-26 10:16:46.698128167 +0000 UTC m=+0.140975337 container start 59b1256adc682954f32ce11213517e088463aa1aa3c72da9532126306dcdf66b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_hamilton, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:16:46 compute-0 podman[274009]: 2026-01-26 10:16:46.70129935 +0000 UTC m=+0.144146540 container attach 59b1256adc682954f32ce11213517e088463aa1aa3c72da9532126306dcdf66b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 26 10:16:46 compute-0 tender_hamilton[274025]: 167 167
Jan 26 10:16:46 compute-0 systemd[1]: libpod-59b1256adc682954f32ce11213517e088463aa1aa3c72da9532126306dcdf66b.scope: Deactivated successfully.
Jan 26 10:16:46 compute-0 podman[274009]: 2026-01-26 10:16:46.704647988 +0000 UTC m=+0.147495168 container died 59b1256adc682954f32ce11213517e088463aa1aa3c72da9532126306dcdf66b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_hamilton, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:16:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c96bffafbd1b6908ce57d2eb215e95e9650fd4cff4dfba7b209b420890f0ee75-merged.mount: Deactivated successfully.
Jan 26 10:16:46 compute-0 podman[274009]: 2026-01-26 10:16:46.744938887 +0000 UTC m=+0.187786057 container remove 59b1256adc682954f32ce11213517e088463aa1aa3c72da9532126306dcdf66b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_hamilton, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 10:16:46 compute-0 systemd[1]: libpod-conmon-59b1256adc682954f32ce11213517e088463aa1aa3c72da9532126306dcdf66b.scope: Deactivated successfully.
Jan 26 10:16:46 compute-0 podman[274051]: 2026-01-26 10:16:46.920110775 +0000 UTC m=+0.044607414 container create 7bb6ffcc89ab8bff0427801bd580873f5aade41d14750a2e1ad1e6b14dacc1fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_shirley, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:16:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:46.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:46 compute-0 nova_compute[254880]: 2026-01-26 10:16:46.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:16:46 compute-0 nova_compute[254880]: 2026-01-26 10:16:46.960 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:16:46 compute-0 nova_compute[254880]: 2026-01-26 10:16:46.961 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:16:46 compute-0 systemd[1]: Started libpod-conmon-7bb6ffcc89ab8bff0427801bd580873f5aade41d14750a2e1ad1e6b14dacc1fc.scope.
Jan 26 10:16:46 compute-0 podman[274051]: 2026-01-26 10:16:46.898699287 +0000 UTC m=+0.023195946 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:16:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:16:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:16:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:16:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:16:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02877dd420575efebbd24baa19ab9252e42c1b0112e72b8b51ec1b8be4debeeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02877dd420575efebbd24baa19ab9252e42c1b0112e72b8b51ec1b8be4debeeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02877dd420575efebbd24baa19ab9252e42c1b0112e72b8b51ec1b8be4debeeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02877dd420575efebbd24baa19ab9252e42c1b0112e72b8b51ec1b8be4debeeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02877dd420575efebbd24baa19ab9252e42c1b0112e72b8b51ec1b8be4debeeb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:47 compute-0 podman[274051]: 2026-01-26 10:16:47.037416433 +0000 UTC m=+0.161913142 container init 7bb6ffcc89ab8bff0427801bd580873f5aade41d14750a2e1ad1e6b14dacc1fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:16:47 compute-0 podman[274051]: 2026-01-26 10:16:47.051547002 +0000 UTC m=+0.176043651 container start 7bb6ffcc89ab8bff0427801bd580873f5aade41d14750a2e1ad1e6b14dacc1fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:16:47 compute-0 podman[274051]: 2026-01-26 10:16:47.055544946 +0000 UTC m=+0.180041595 container attach 7bb6ffcc89ab8bff0427801bd580873f5aade41d14750a2e1ad1e6b14dacc1fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:16:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:47.179Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:16:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:47.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:47 compute-0 ceph-mon[74456]: pgmap v1037: 353 pgs: 353 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 120 KiB/s wr, 96 op/s
Jan 26 10:16:47 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:16:47 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:16:47 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:16:47 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:16:47 compute-0 stoic_shirley[274068]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:16:47 compute-0 stoic_shirley[274068]: --> All data devices are unavailable
Jan 26 10:16:47 compute-0 systemd[1]: libpod-7bb6ffcc89ab8bff0427801bd580873f5aade41d14750a2e1ad1e6b14dacc1fc.scope: Deactivated successfully.
Jan 26 10:16:47 compute-0 podman[274051]: 2026-01-26 10:16:47.389143934 +0000 UTC m=+0.513640603 container died 7bb6ffcc89ab8bff0427801bd580873f5aade41d14750a2e1ad1e6b14dacc1fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_shirley, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 10:16:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-02877dd420575efebbd24baa19ab9252e42c1b0112e72b8b51ec1b8be4debeeb-merged.mount: Deactivated successfully.
Jan 26 10:16:47 compute-0 podman[274051]: 2026-01-26 10:16:47.439678352 +0000 UTC m=+0.564174991 container remove 7bb6ffcc89ab8bff0427801bd580873f5aade41d14750a2e1ad1e6b14dacc1fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_shirley, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:16:47 compute-0 systemd[1]: libpod-conmon-7bb6ffcc89ab8bff0427801bd580873f5aade41d14750a2e1ad1e6b14dacc1fc.scope: Deactivated successfully.
Jan 26 10:16:47 compute-0 sudo[273942]: pam_unix(sudo:session): session closed for user root
Jan 26 10:16:47 compute-0 sudo[274097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:16:47 compute-0 sudo[274097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:16:47 compute-0 sudo[274097]: pam_unix(sudo:session): session closed for user root
Jan 26 10:16:47 compute-0 sudo[274122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:16:47 compute-0 sudo[274122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:16:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1038: 353 pgs: 353 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Jan 26 10:16:47 compute-0 podman[274191]: 2026-01-26 10:16:47.987942606 +0000 UTC m=+0.045183388 container create 819759425b8677e861364c603842eefe517b3b10b7ed99efc2ad11e572b75412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_driscoll, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:16:48 compute-0 systemd[1]: Started libpod-conmon-819759425b8677e861364c603842eefe517b3b10b7ed99efc2ad11e572b75412.scope.
Jan 26 10:16:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:16:48 compute-0 podman[274191]: 2026-01-26 10:16:47.96772172 +0000 UTC m=+0.024962552 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:16:48 compute-0 podman[274191]: 2026-01-26 10:16:48.068839056 +0000 UTC m=+0.126079838 container init 819759425b8677e861364c603842eefe517b3b10b7ed99efc2ad11e572b75412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 10:16:48 compute-0 podman[274191]: 2026-01-26 10:16:48.075685075 +0000 UTC m=+0.132925857 container start 819759425b8677e861364c603842eefe517b3b10b7ed99efc2ad11e572b75412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_driscoll, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:16:48 compute-0 podman[274191]: 2026-01-26 10:16:48.080024918 +0000 UTC m=+0.137265770 container attach 819759425b8677e861364c603842eefe517b3b10b7ed99efc2ad11e572b75412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_driscoll, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 10:16:48 compute-0 systemd[1]: libpod-819759425b8677e861364c603842eefe517b3b10b7ed99efc2ad11e572b75412.scope: Deactivated successfully.
Jan 26 10:16:48 compute-0 cool_driscoll[274207]: 167 167
Jan 26 10:16:48 compute-0 conmon[274207]: conmon 819759425b8677e86136 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-819759425b8677e861364c603842eefe517b3b10b7ed99efc2ad11e572b75412.scope/container/memory.events
Jan 26 10:16:48 compute-0 podman[274191]: 2026-01-26 10:16:48.082235315 +0000 UTC m=+0.139476127 container died 819759425b8677e861364c603842eefe517b3b10b7ed99efc2ad11e572b75412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_driscoll, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 26 10:16:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-89e9a9dda91e07ff20ba4117405146bbc141d784cca8589bac24faac6c5ce619-merged.mount: Deactivated successfully.
Jan 26 10:16:48 compute-0 podman[274191]: 2026-01-26 10:16:48.122099255 +0000 UTC m=+0.179340047 container remove 819759425b8677e861364c603842eefe517b3b10b7ed99efc2ad11e572b75412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_driscoll, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 26 10:16:48 compute-0 systemd[1]: libpod-conmon-819759425b8677e861364c603842eefe517b3b10b7ed99efc2ad11e572b75412.scope: Deactivated successfully.
Jan 26 10:16:48 compute-0 podman[274229]: 2026-01-26 10:16:48.280377351 +0000 UTC m=+0.040162958 container create 78968800c49c9a9dda15a5453cc5cbd68c3d5db0669fa99688157403154f5afd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 26 10:16:48 compute-0 systemd[1]: Started libpod-conmon-78968800c49c9a9dda15a5453cc5cbd68c3d5db0669fa99688157403154f5afd.scope.
Jan 26 10:16:48 compute-0 ceph-mon[74456]: pgmap v1038: 353 pgs: 353 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Jan 26 10:16:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6992dd086873322cdb04b66e936c4193b46df5a1c0044dacad993929a9bdd8dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6992dd086873322cdb04b66e936c4193b46df5a1c0044dacad993929a9bdd8dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6992dd086873322cdb04b66e936c4193b46df5a1c0044dacad993929a9bdd8dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6992dd086873322cdb04b66e936c4193b46df5a1c0044dacad993929a9bdd8dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:48 compute-0 podman[274229]: 2026-01-26 10:16:48.350804887 +0000 UTC m=+0.110590504 container init 78968800c49c9a9dda15a5453cc5cbd68c3d5db0669fa99688157403154f5afd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:16:48 compute-0 podman[274229]: 2026-01-26 10:16:48.263712997 +0000 UTC m=+0.023498634 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:16:48 compute-0 podman[274229]: 2026-01-26 10:16:48.358207701 +0000 UTC m=+0.117993298 container start 78968800c49c9a9dda15a5453cc5cbd68c3d5db0669fa99688157403154f5afd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 26 10:16:48 compute-0 podman[274229]: 2026-01-26 10:16:48.361697901 +0000 UTC m=+0.121483528 container attach 78968800c49c9a9dda15a5453cc5cbd68c3d5db0669fa99688157403154f5afd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_dirac, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]: {
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:     "0": [
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:         {
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:             "devices": [
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "/dev/loop3"
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:             ],
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:             "lv_name": "ceph_lv0",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:             "lv_size": "21470642176",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:             "name": "ceph_lv0",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:             "tags": {
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "ceph.cluster_name": "ceph",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "ceph.crush_device_class": "",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "ceph.encrypted": "0",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "ceph.osd_id": "0",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "ceph.type": "block",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "ceph.vdo": "0",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:                 "ceph.with_tpm": "0"
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:             },
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:             "type": "block",
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:             "vg_name": "ceph_vg0"
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:         }
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]:     ]
Jan 26 10:16:48 compute-0 pedantic_dirac[274245]: }
Jan 26 10:16:48 compute-0 systemd[1]: libpod-78968800c49c9a9dda15a5453cc5cbd68c3d5db0669fa99688157403154f5afd.scope: Deactivated successfully.
Jan 26 10:16:48 compute-0 podman[274229]: 2026-01-26 10:16:48.671099558 +0000 UTC m=+0.430885165 container died 78968800c49c9a9dda15a5453cc5cbd68c3d5db0669fa99688157403154f5afd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Jan 26 10:16:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6992dd086873322cdb04b66e936c4193b46df5a1c0044dacad993929a9bdd8dd-merged.mount: Deactivated successfully.
Jan 26 10:16:48 compute-0 podman[274229]: 2026-01-26 10:16:48.712491878 +0000 UTC m=+0.472277485 container remove 78968800c49c9a9dda15a5453cc5cbd68c3d5db0669fa99688157403154f5afd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:16:48 compute-0 systemd[1]: libpod-conmon-78968800c49c9a9dda15a5453cc5cbd68c3d5db0669fa99688157403154f5afd.scope: Deactivated successfully.
Jan 26 10:16:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:16:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:16:48 compute-0 sudo[274122]: pam_unix(sudo:session): session closed for user root
Jan 26 10:16:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:16:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:16:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:16:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:16:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:16:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:16:48 compute-0 sudo[274268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:16:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:48.845Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:16:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:48.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:16:48 compute-0 sudo[274268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:16:48 compute-0 sudo[274268]: pam_unix(sudo:session): session closed for user root
Jan 26 10:16:48 compute-0 sudo[274293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:16:48 compute-0 sudo[274293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:16:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:16:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:48.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:16:49 compute-0 nova_compute[254880]: 2026-01-26 10:16:49.175 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:16:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:49.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:16:49 compute-0 podman[274359]: 2026-01-26 10:16:49.298222519 +0000 UTC m=+0.037871708 container create 28e15a4b9d2d400bef0c76564442dd1313355707d1752fb3cb07ad0eda5dfb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 10:16:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:16:49 compute-0 systemd[1]: Started libpod-conmon-28e15a4b9d2d400bef0c76564442dd1313355707d1752fb3cb07ad0eda5dfb16.scope.
Jan 26 10:16:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:16:49 compute-0 podman[274359]: 2026-01-26 10:16:49.282037217 +0000 UTC m=+0.021686426 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:16:49 compute-0 podman[274359]: 2026-01-26 10:16:49.383740479 +0000 UTC m=+0.123389758 container init 28e15a4b9d2d400bef0c76564442dd1313355707d1752fb3cb07ad0eda5dfb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 10:16:49 compute-0 podman[274359]: 2026-01-26 10:16:49.392073556 +0000 UTC m=+0.131722755 container start 28e15a4b9d2d400bef0c76564442dd1313355707d1752fb3cb07ad0eda5dfb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_carver, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 10:16:49 compute-0 podman[274359]: 2026-01-26 10:16:49.395614478 +0000 UTC m=+0.135263667 container attach 28e15a4b9d2d400bef0c76564442dd1313355707d1752fb3cb07ad0eda5dfb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_carver, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:16:49 compute-0 funny_carver[274375]: 167 167
Jan 26 10:16:49 compute-0 systemd[1]: libpod-28e15a4b9d2d400bef0c76564442dd1313355707d1752fb3cb07ad0eda5dfb16.scope: Deactivated successfully.
Jan 26 10:16:49 compute-0 podman[274359]: 2026-01-26 10:16:49.399272443 +0000 UTC m=+0.138921662 container died 28e15a4b9d2d400bef0c76564442dd1313355707d1752fb3cb07ad0eda5dfb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_carver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:16:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3367e80bdc0e67bb81e5c9362102ec942fa97609fd1c68b134ac0c5e351a2f3-merged.mount: Deactivated successfully.
Jan 26 10:16:49 compute-0 podman[274359]: 2026-01-26 10:16:49.448356193 +0000 UTC m=+0.188005412 container remove 28e15a4b9d2d400bef0c76564442dd1313355707d1752fb3cb07ad0eda5dfb16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 26 10:16:49 compute-0 systemd[1]: libpod-conmon-28e15a4b9d2d400bef0c76564442dd1313355707d1752fb3cb07ad0eda5dfb16.scope: Deactivated successfully.
Jan 26 10:16:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:16:49 compute-0 podman[274398]: 2026-01-26 10:16:49.625950544 +0000 UTC m=+0.049497361 container create 68d0e65b7f52e937904a33246517868f9d2df25b7045341e12dcf76aaf48fc83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:16:49 compute-0 systemd[1]: Started libpod-conmon-68d0e65b7f52e937904a33246517868f9d2df25b7045341e12dcf76aaf48fc83.scope.
Jan 26 10:16:49 compute-0 podman[274398]: 2026-01-26 10:16:49.609256678 +0000 UTC m=+0.032803525 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:16:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b2aba5c4b99e3ad5cba6ce3c29043de3956f8dd86f24b35dc7d54ef3d85529/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b2aba5c4b99e3ad5cba6ce3c29043de3956f8dd86f24b35dc7d54ef3d85529/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b2aba5c4b99e3ad5cba6ce3c29043de3956f8dd86f24b35dc7d54ef3d85529/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b2aba5c4b99e3ad5cba6ce3c29043de3956f8dd86f24b35dc7d54ef3d85529/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:16:49 compute-0 podman[274398]: 2026-01-26 10:16:49.733695433 +0000 UTC m=+0.157242270 container init 68d0e65b7f52e937904a33246517868f9d2df25b7045341e12dcf76aaf48fc83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bose, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:16:49 compute-0 podman[274398]: 2026-01-26 10:16:49.752610926 +0000 UTC m=+0.176157783 container start 68d0e65b7f52e937904a33246517868f9d2df25b7045341e12dcf76aaf48fc83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 10:16:49 compute-0 podman[274398]: 2026-01-26 10:16:49.757696099 +0000 UTC m=+0.181242936 container attach 68d0e65b7f52e937904a33246517868f9d2df25b7045341e12dcf76aaf48fc83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:16:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1039: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 26 10:16:50 compute-0 nova_compute[254880]: 2026-01-26 10:16:50.059 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:50 compute-0 ceph-mon[74456]: pgmap v1039: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 26 10:16:50 compute-0 lvm[274489]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:16:50 compute-0 lvm[274489]: VG ceph_vg0 finished
Jan 26 10:16:50 compute-0 lucid_bose[274414]: {}
Jan 26 10:16:50 compute-0 systemd[1]: libpod-68d0e65b7f52e937904a33246517868f9d2df25b7045341e12dcf76aaf48fc83.scope: Deactivated successfully.
Jan 26 10:16:50 compute-0 systemd[1]: libpod-68d0e65b7f52e937904a33246517868f9d2df25b7045341e12dcf76aaf48fc83.scope: Consumed 1.102s CPU time.
Jan 26 10:16:50 compute-0 podman[274495]: 2026-01-26 10:16:50.505414273 +0000 UTC m=+0.029200712 container died 68d0e65b7f52e937904a33246517868f9d2df25b7045341e12dcf76aaf48fc83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bose, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-69b2aba5c4b99e3ad5cba6ce3c29043de3956f8dd86f24b35dc7d54ef3d85529-merged.mount: Deactivated successfully.
Jan 26 10:16:50 compute-0 podman[274495]: 2026-01-26 10:16:50.545490189 +0000 UTC m=+0.069276618 container remove 68d0e65b7f52e937904a33246517868f9d2df25b7045341e12dcf76aaf48fc83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:16:50 compute-0 systemd[1]: libpod-conmon-68d0e65b7f52e937904a33246517868f9d2df25b7045341e12dcf76aaf48fc83.scope: Deactivated successfully.
Jan 26 10:16:50 compute-0 sudo[274293]: pam_unix(sudo:session): session closed for user root
Jan 26 10:16:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:16:50 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:16:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:16:50 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
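
[annotation] The create -> init -> start -> attach -> died -> remove sequences above (containers funny_carver and lucid_bose) are the trail of cephadm launching short-lived, auto-removed helper containers; the "{}" printed by lucid_bose followed by the config-key set of mgr/cephadm/host.compute-0.devices.0 suggests a device-inventory pass. A minimal sketch of an equivalent invocation, assuming the helper ran "ceph-volume inventory" (the actual command line is not recorded in the log):

    # Sketch only: the exact helper command is not logged; 'ceph-volume
    # inventory' is an assumption consistent with the devices.0 key update.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # '--rm' produces the died/remove/conmon-scope-deactivated trail above.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "inventory", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # lucid_bose printed '{}' at this point in the log
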
Jan 26 10:16:50 compute-0 sudo[274510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:16:50 compute-0 sudo[274510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:16:50 compute-0 sudo[274510]: pam_unix(sudo:session): session closed for user root
Jan 26 10:16:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:50.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:16:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:51.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
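
[annotation] The recurring anonymous "HEAD / HTTP/1.0" requests, alternating between 192.168.122.100 and 192.168.122.102 roughly every two seconds, have the shape of load-balancer health probes against radosgw's beast frontend. A probe along these lines reproduces the 200/empty-body entries; the target hostname and port are assumptions, since the log does not record the listening socket:

    # Illustrative health probe; host and port are assumed, not logged.
    import http.client

    conn = http.client.HTTPConnection("compute-0.ctlplane.example.com",
                                      8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # radosgw answers HEAD / with 200 and no body
    conn.close()
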
Jan 26 10:16:51 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:16:51 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:16:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1040: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 10:16:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:16:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:16:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:16:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:16:52 compute-0 ceph-mon[74456]: pgmap v1040: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 10:16:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:52.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:53 compute-0 podman[274538]: 2026-01-26 10:16:53.128110454 +0000 UTC m=+0.063150258 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 26 10:16:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:53.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1041: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 10:16:54 compute-0 nova_compute[254880]: 2026-01-26 10:16:54.176 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:16:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:54.702 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:16:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:54.702 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:16:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:16:54.703 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
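
[annotation] The Acquiring/acquired/released triplets logged here by ovn_metadata_agent (and by nova_compute below, e.g. around the "-events" lock) are the standard oslo.concurrency trace: the decorator's "inner" wrapper logs lock entry and exit with the waited/held timings at DEBUG. A minimal sketch of the pattern that emits them, with an illustrative lock name:

    # Minimal oslo.concurrency sketch; the 'inner' wrapper logs
    # "Acquiring lock ...", "... acquired ... waited Ns", and
    # "... 'released' ... held Ns" -- the triplets seen in this log.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # body runs while the named in-process lock is held

    _check_child_processes()
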
Jan 26 10:16:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:54.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:54 compute-0 ceph-mon[74456]: pgmap v1041: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 10:16:55 compute-0 nova_compute[254880]: 2026-01-26 10:16:55.061 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:16:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:55.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:16:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1042: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:16:56 compute-0 ceph-mon[74456]: pgmap v1042: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:16:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:16:56] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Jan 26 10:16:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:16:56] "GET /metrics HTTP/1.1" 200 48553 "" "Prometheus/2.51.0"
Jan 26 10:16:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:56.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:16:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:16:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:16:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:16:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:16:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:57.180Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:16:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:57.180Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:16:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:57.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1043: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:16:58 compute-0 nova_compute[254880]: 2026-01-26 10:16:58.415 254884 INFO nova.compute.manager [None req-8b81c557-cac4-4e43-94ac-cb077d6dc86a c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Get console output
Jan 26 10:16:58 compute-0 nova_compute[254880]: 2026-01-26 10:16:58.421 268147 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 10:16:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:58.847Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:16:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:16:58.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:16:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:16:58.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:59 compute-0 ceph-mon[74456]: pgmap v1043: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:16:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/653039017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:16:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/653039017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
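
[annotation] The paired "df" and "osd pool get-quota" dispatches from entity='client.openstack' look like a periodic capacity poll against the "volumes" pool by an OpenStack service. A hedged reproduction with python3-rados; the conffile path and client id follow the log context, but the code itself is illustrative:

    # Hedged sketch of the two mon commands dispatched above.
    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     rados_id="openstack") as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota",
                     "pool": "volumes", "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret)
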
Jan 26 10:16:59 compute-0 nova_compute[254880]: 2026-01-26 10:16:59.177 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:16:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:16:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:16:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:16:59.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:16:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:16:59 compute-0 nova_compute[254880]: 2026-01-26 10:16:59.624 254884 DEBUG nova.compute.manager [req-14c9448f-b5d5-447e-8cb5-3ecbce077f50 req-153540cd-69a2-4439-b2f2-8dff710287a4 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-changed-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:16:59 compute-0 nova_compute[254880]: 2026-01-26 10:16:59.624 254884 DEBUG nova.compute.manager [req-14c9448f-b5d5-447e-8cb5-3ecbce077f50 req-153540cd-69a2-4439-b2f2-8dff710287a4 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Refreshing instance network info cache due to event network-changed-1cdd4fc2-81a5-488e-820c-586ca6c12d57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:16:59 compute-0 nova_compute[254880]: 2026-01-26 10:16:59.625 254884 DEBUG oslo_concurrency.lockutils [req-14c9448f-b5d5-447e-8cb5-3ecbce077f50 req-153540cd-69a2-4439-b2f2-8dff710287a4 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:16:59 compute-0 nova_compute[254880]: 2026-01-26 10:16:59.625 254884 DEBUG oslo_concurrency.lockutils [req-14c9448f-b5d5-447e-8cb5-3ecbce077f50 req-153540cd-69a2-4439-b2f2-8dff710287a4 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:16:59 compute-0 nova_compute[254880]: 2026-01-26 10:16:59.625 254884 DEBUG nova.network.neutron [req-14c9448f-b5d5-447e-8cb5-3ecbce077f50 req-153540cd-69a2-4439-b2f2-8dff710287a4 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Refreshing network info cache for port 1cdd4fc2-81a5-488e-820c-586ca6c12d57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:16:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1044: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 26 10:17:00 compute-0 nova_compute[254880]: 2026-01-26 10:17:00.063 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:00 compute-0 ceph-mon[74456]: pgmap v1044: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 26 10:17:00 compute-0 nova_compute[254880]: 2026-01-26 10:17:00.641 254884 INFO nova.compute.manager [None req-052b4995-b985-4b3c-804b-eeb8bd54d10b c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Get console output
Jan 26 10:17:00 compute-0 nova_compute[254880]: 2026-01-26 10:17:00.648 268147 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 10:17:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:00.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:00 compute-0 nova_compute[254880]: 2026-01-26 10:17:00.951 254884 DEBUG nova.network.neutron [req-14c9448f-b5d5-447e-8cb5-3ecbce077f50 req-153540cd-69a2-4439-b2f2-8dff710287a4 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updated VIF entry in instance network info cache for port 1cdd4fc2-81a5-488e-820c-586ca6c12d57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:17:00 compute-0 nova_compute[254880]: 2026-01-26 10:17:00.952 254884 DEBUG nova.network.neutron [req-14c9448f-b5d5-447e-8cb5-3ecbce077f50 req-153540cd-69a2-4439-b2f2-8dff710287a4 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updating instance_info_cache with network_info: [{"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:17:00 compute-0 nova_compute[254880]: 2026-01-26 10:17:00.970 254884 DEBUG oslo_concurrency.lockutils [req-14c9448f-b5d5-447e-8cb5-3ecbce077f50 req-153540cd-69a2-4439-b2f2-8dff710287a4 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:17:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:01.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:01 compute-0 nova_compute[254880]: 2026-01-26 10:17:01.733 254884 DEBUG nova.compute.manager [req-5f97e5a2-ab61-450e-b5ce-a54a52f46787 req-73d839b4-7f37-443c-87e0-7d89db9843be b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-vif-unplugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:17:01 compute-0 nova_compute[254880]: 2026-01-26 10:17:01.734 254884 DEBUG oslo_concurrency.lockutils [req-5f97e5a2-ab61-450e-b5ce-a54a52f46787 req-73d839b4-7f37-443c-87e0-7d89db9843be b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:01 compute-0 nova_compute[254880]: 2026-01-26 10:17:01.735 254884 DEBUG oslo_concurrency.lockutils [req-5f97e5a2-ab61-450e-b5ce-a54a52f46787 req-73d839b4-7f37-443c-87e0-7d89db9843be b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:01 compute-0 nova_compute[254880]: 2026-01-26 10:17:01.735 254884 DEBUG oslo_concurrency.lockutils [req-5f97e5a2-ab61-450e-b5ce-a54a52f46787 req-73d839b4-7f37-443c-87e0-7d89db9843be b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:01 compute-0 nova_compute[254880]: 2026-01-26 10:17:01.735 254884 DEBUG nova.compute.manager [req-5f97e5a2-ab61-450e-b5ce-a54a52f46787 req-73d839b4-7f37-443c-87e0-7d89db9843be b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] No waiting events found dispatching network-vif-unplugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:17:01 compute-0 nova_compute[254880]: 2026-01-26 10:17:01.735 254884 WARNING nova.compute.manager [req-5f97e5a2-ab61-450e-b5ce-a54a52f46787 req-73d839b4-7f37-443c-87e0-7d89db9843be b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received unexpected event network-vif-unplugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 for instance with vm_state active and task_state None.
Jan 26 10:17:01 compute-0 nova_compute[254880]: 2026-01-26 10:17:01.736 254884 DEBUG nova.compute.manager [req-5f97e5a2-ab61-450e-b5ce-a54a52f46787 req-73d839b4-7f37-443c-87e0-7d89db9843be b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:17:01 compute-0 nova_compute[254880]: 2026-01-26 10:17:01.736 254884 DEBUG oslo_concurrency.lockutils [req-5f97e5a2-ab61-450e-b5ce-a54a52f46787 req-73d839b4-7f37-443c-87e0-7d89db9843be b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:01 compute-0 nova_compute[254880]: 2026-01-26 10:17:01.736 254884 DEBUG oslo_concurrency.lockutils [req-5f97e5a2-ab61-450e-b5ce-a54a52f46787 req-73d839b4-7f37-443c-87e0-7d89db9843be b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:01 compute-0 nova_compute[254880]: 2026-01-26 10:17:01.736 254884 DEBUG oslo_concurrency.lockutils [req-5f97e5a2-ab61-450e-b5ce-a54a52f46787 req-73d839b4-7f37-443c-87e0-7d89db9843be b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:01 compute-0 nova_compute[254880]: 2026-01-26 10:17:01.736 254884 DEBUG nova.compute.manager [req-5f97e5a2-ab61-450e-b5ce-a54a52f46787 req-73d839b4-7f37-443c-87e0-7d89db9843be b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] No waiting events found dispatching network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:17:01 compute-0 nova_compute[254880]: 2026-01-26 10:17:01.737 254884 WARNING nova.compute.manager [req-5f97e5a2-ab61-450e-b5ce-a54a52f46787 req-73d839b4-7f37-443c-87e0-7d89db9843be b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received unexpected event network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 for instance with vm_state active and task_state None.
Jan 26 10:17:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1045: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 12 KiB/s wr, 2 op/s
Jan 26 10:17:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:17:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:17:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:17:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:17:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:17:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:02.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:17:03 compute-0 ceph-mon[74456]: pgmap v1045: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 12 KiB/s wr, 2 op/s
Jan 26 10:17:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:03.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.318 254884 DEBUG nova.compute.manager [req-d2c2b026-72c1-4445-906e-89183785c458 req-f5dda0a9-d76a-46f6-9345-7f01b9de8f80 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-changed-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.319 254884 DEBUG nova.compute.manager [req-d2c2b026-72c1-4445-906e-89183785c458 req-f5dda0a9-d76a-46f6-9345-7f01b9de8f80 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Refreshing instance network info cache due to event network-changed-1cdd4fc2-81a5-488e-820c-586ca6c12d57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.319 254884 DEBUG oslo_concurrency.lockutils [req-d2c2b026-72c1-4445-906e-89183785c458 req-f5dda0a9-d76a-46f6-9345-7f01b9de8f80 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.319 254884 DEBUG oslo_concurrency.lockutils [req-d2c2b026-72c1-4445-906e-89183785c458 req-f5dda0a9-d76a-46f6-9345-7f01b9de8f80 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.320 254884 DEBUG nova.network.neutron [req-d2c2b026-72c1-4445-906e-89183785c458 req-f5dda0a9-d76a-46f6-9345-7f01b9de8f80 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Refreshing network info cache for port 1cdd4fc2-81a5-488e-820c-586ca6c12d57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.548 254884 INFO nova.compute.manager [None req-b8aa48f6-4daf-4462-b3fd-064a31c93df0 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Get console output
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.552 268147 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 10:17:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:17:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.833 254884 DEBUG nova.compute.manager [req-6dc809d3-9c49-4f85-970a-ac9fe84aaa9c req-ebe501b6-3098-4012-a13c-55775c5b3ecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.833 254884 DEBUG oslo_concurrency.lockutils [req-6dc809d3-9c49-4f85-970a-ac9fe84aaa9c req-ebe501b6-3098-4012-a13c-55775c5b3ecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.833 254884 DEBUG oslo_concurrency.lockutils [req-6dc809d3-9c49-4f85-970a-ac9fe84aaa9c req-ebe501b6-3098-4012-a13c-55775c5b3ecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.834 254884 DEBUG oslo_concurrency.lockutils [req-6dc809d3-9c49-4f85-970a-ac9fe84aaa9c req-ebe501b6-3098-4012-a13c-55775c5b3ecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.834 254884 DEBUG nova.compute.manager [req-6dc809d3-9c49-4f85-970a-ac9fe84aaa9c req-ebe501b6-3098-4012-a13c-55775c5b3ecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] No waiting events found dispatching network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.834 254884 WARNING nova.compute.manager [req-6dc809d3-9c49-4f85-970a-ac9fe84aaa9c req-ebe501b6-3098-4012-a13c-55775c5b3ecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received unexpected event network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 for instance with vm_state active and task_state None.
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.834 254884 DEBUG nova.compute.manager [req-6dc809d3-9c49-4f85-970a-ac9fe84aaa9c req-ebe501b6-3098-4012-a13c-55775c5b3ecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.835 254884 DEBUG oslo_concurrency.lockutils [req-6dc809d3-9c49-4f85-970a-ac9fe84aaa9c req-ebe501b6-3098-4012-a13c-55775c5b3ecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.835 254884 DEBUG oslo_concurrency.lockutils [req-6dc809d3-9c49-4f85-970a-ac9fe84aaa9c req-ebe501b6-3098-4012-a13c-55775c5b3ecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.835 254884 DEBUG oslo_concurrency.lockutils [req-6dc809d3-9c49-4f85-970a-ac9fe84aaa9c req-ebe501b6-3098-4012-a13c-55775c5b3ecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.835 254884 DEBUG nova.compute.manager [req-6dc809d3-9c49-4f85-970a-ac9fe84aaa9c req-ebe501b6-3098-4012-a13c-55775c5b3ecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] No waiting events found dispatching network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:17:03 compute-0 nova_compute[254880]: 2026-01-26 10:17:03.835 254884 WARNING nova.compute.manager [req-6dc809d3-9c49-4f85-970a-ac9fe84aaa9c req-ebe501b6-3098-4012-a13c-55775c5b3ecd b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received unexpected event network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 for instance with vm_state active and task_state None.
Jan 26 10:17:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1046: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 12 KiB/s wr, 2 op/s
Jan 26 10:17:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:17:04 compute-0 nova_compute[254880]: 2026-01-26 10:17:04.179 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:17:04 compute-0 nova_compute[254880]: 2026-01-26 10:17:04.836 254884 DEBUG nova.network.neutron [req-d2c2b026-72c1-4445-906e-89183785c458 req-f5dda0a9-d76a-46f6-9345-7f01b9de8f80 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updated VIF entry in instance network info cache for port 1cdd4fc2-81a5-488e-820c-586ca6c12d57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:17:04 compute-0 nova_compute[254880]: 2026-01-26 10:17:04.837 254884 DEBUG nova.network.neutron [req-d2c2b026-72c1-4445-906e-89183785c458 req-f5dda0a9-d76a-46f6-9345-7f01b9de8f80 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updating instance_info_cache with network_info: [{"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:17:04 compute-0 nova_compute[254880]: 2026-01-26 10:17:04.852 254884 DEBUG oslo_concurrency.lockutils [req-d2c2b026-72c1-4445-906e-89183785c458 req-f5dda0a9-d76a-46f6-9345-7f01b9de8f80 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:17:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:17:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:04.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:17:05 compute-0 nova_compute[254880]: 2026-01-26 10:17:05.065 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:05 compute-0 ceph-mon[74456]: pgmap v1046: 353 pgs: 353 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 12 KiB/s wr, 2 op/s
Jan 26 10:17:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:05.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:05 compute-0 sudo[274569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:17:05 compute-0 sudo[274569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:17:05 compute-0 sudo[274569]: pam_unix(sudo:session): session closed for user root
Jan 26 10:17:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1047: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 19 KiB/s wr, 3 op/s
Jan 26 10:17:06 compute-0 ceph-mon[74456]: pgmap v1047: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 19 KiB/s wr, 3 op/s
Jan 26 10:17:06 compute-0 podman[274594]: 2026-01-26 10:17:06.164960811 +0000 UTC m=+0.093987322 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 26 10:17:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:17:06] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Jan 26 10:17:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:17:06] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Jan 26 10:17:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:17:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:06.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:17:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:17:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:17:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:17:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:17:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:07.181Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:17:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:07.181Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:17:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:17:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:07.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:17:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1048: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 6.7 KiB/s wr, 2 op/s
Jan 26 10:17:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:08.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:17:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:17:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:08.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:17:09 compute-0 nova_compute[254880]: 2026-01-26 10:17:09.181 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:09 compute-0 ceph-mon[74456]: pgmap v1048: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 6.7 KiB/s wr, 2 op/s
Jan 26 10:17:09 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1907053133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:17:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:09.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:17:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1049: 353 pgs: 353 active+clean; 121 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 8.5 KiB/s wr, 30 op/s
Jan 26 10:17:10 compute-0 nova_compute[254880]: 2026-01-26 10:17:10.067 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:10 compute-0 ceph-mon[74456]: pgmap v1049: 353 pgs: 353 active+clean; 121 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 8.5 KiB/s wr, 30 op/s
Jan 26 10:17:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:10.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:17:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:11.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:17:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1050: 353 pgs: 353 active+clean; 121 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 8.5 KiB/s wr, 29 op/s
Jan 26 10:17:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:17:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:17:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:17:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:17:12 compute-0 ceph-mon[74456]: pgmap v1050: 353 pgs: 353 active+clean; 121 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 8.5 KiB/s wr, 29 op/s
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.278 254884 DEBUG nova.compute.manager [req-ecfae113-4f30-4457-a1cf-881fa7bec97d req-2cf4add3-8a08-49cf-a212-90f0dd26f481 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-changed-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.279 254884 DEBUG nova.compute.manager [req-ecfae113-4f30-4457-a1cf-881fa7bec97d req-2cf4add3-8a08-49cf-a212-90f0dd26f481 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Refreshing instance network info cache due to event network-changed-1cdd4fc2-81a5-488e-820c-586ca6c12d57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.280 254884 DEBUG oslo_concurrency.lockutils [req-ecfae113-4f30-4457-a1cf-881fa7bec97d req-2cf4add3-8a08-49cf-a212-90f0dd26f481 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.280 254884 DEBUG oslo_concurrency.lockutils [req-ecfae113-4f30-4457-a1cf-881fa7bec97d req-2cf4add3-8a08-49cf-a212-90f0dd26f481 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.280 254884 DEBUG nova.network.neutron [req-ecfae113-4f30-4457-a1cf-881fa7bec97d req-2cf4add3-8a08-49cf-a212-90f0dd26f481 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Refreshing network info cache for port 1cdd4fc2-81a5-488e-820c-586ca6c12d57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.485 254884 DEBUG oslo_concurrency.lockutils [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.485 254884 DEBUG oslo_concurrency.lockutils [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.486 254884 DEBUG oslo_concurrency.lockutils [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.486 254884 DEBUG oslo_concurrency.lockutils [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.486 254884 DEBUG oslo_concurrency.lockutils [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.488 254884 INFO nova.compute.manager [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Terminating instance
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.489 254884 DEBUG nova.compute.manager [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 10:17:12 compute-0 kernel: tap1cdd4fc2-81 (unregistering): left promiscuous mode
Jan 26 10:17:12 compute-0 NetworkManager[48970]: <info>  [1769422632.5507] device (tap1cdd4fc2-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 10:17:12 compute-0 ovn_controller[155832]: 2026-01-26T10:17:12Z|00074|binding|INFO|Releasing lport 1cdd4fc2-81a5-488e-820c-586ca6c12d57 from this chassis (sb_readonly=0)
Jan 26 10:17:12 compute-0 ovn_controller[155832]: 2026-01-26T10:17:12Z|00075|binding|INFO|Setting lport 1cdd4fc2-81a5-488e-820c-586ca6c12d57 down in Southbound
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.564 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:12 compute-0 ovn_controller[155832]: 2026-01-26T10:17:12Z|00076|binding|INFO|Removing iface tap1cdd4fc2-81 ovn-installed in OVS
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.566 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.585 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:12 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 26 10:17:12 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Consumed 14.940s CPU time.
Jan 26 10:17:12 compute-0 systemd-machined[221254]: Machine qemu-4-instance-0000000b terminated.
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.714 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.719 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.730 254884 INFO nova.virt.libvirt.driver [-] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Instance destroyed successfully.
Jan 26 10:17:12 compute-0 nova_compute[254880]: 2026-01-26 10:17:12.731 254884 DEBUG nova.objects.instance [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'resources' on Instance uuid 95d9d3cd-1887-4125-b0e7-2252b73dbe82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:17:12 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:12.850 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:fb:ae 10.100.0.11'], port_security=['fa:16:3e:c2:fb:ae 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '95d9d3cd-1887-4125-b0e7-2252b73dbe82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e066970b-3668-485e-a8ee-d7788d42c06f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db1eda71-392f-4d4b-8724-78530674037e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], logical_port=1cdd4fc2-81a5-488e-820c-586ca6c12d57) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:17:12 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:12.852 166625 INFO neutron.agent.ovn.metadata.agent [-] Port 1cdd4fc2-81a5-488e-820c-586ca6c12d57 in datapath 9bff64e0-694f-4b2d-b4b5-5e3b1d94460e unbound from our chassis
Jan 26 10:17:12 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:12.854 166625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bff64e0-694f-4b2d-b4b5-5e3b1d94460e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 10:17:12 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:12.855 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8fdb47-183a-49f0-a1ce-3ca67cf1d1f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:12 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:12.856 166625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e namespace which is not needed anymore
Jan 26 10:17:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:12.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:12 compute-0 neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e[273648]: [NOTICE]   (273652) : haproxy version is 2.8.14-c23fe91
Jan 26 10:17:12 compute-0 neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e[273648]: [NOTICE]   (273652) : path to executable is /usr/sbin/haproxy
Jan 26 10:17:12 compute-0 neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e[273648]: [WARNING]  (273652) : Exiting Master process...
Jan 26 10:17:12 compute-0 neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e[273648]: [WARNING]  (273652) : Exiting Master process...
Jan 26 10:17:12 compute-0 neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e[273648]: [ALERT]    (273652) : Current worker (273654) exited with code 143 (Terminated)
Jan 26 10:17:12 compute-0 neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e[273648]: [WARNING]  (273652) : All workers exited. Exiting... (0)
Jan 26 10:17:12 compute-0 systemd[1]: libpod-ac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094.scope: Deactivated successfully.
Jan 26 10:17:12 compute-0 podman[274662]: 2026-01-26 10:17:12.996377915 +0000 UTC m=+0.048298999 container died ac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:17:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094-userdata-shm.mount: Deactivated successfully.
Jan 26 10:17:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c388cb48ee58c27ce7c8a92350d0710e5829f3a72e1cf571ec646bef6410ec1-merged.mount: Deactivated successfully.
Jan 26 10:17:13 compute-0 nova_compute[254880]: 2026-01-26 10:17:13.025 254884 DEBUG nova.virt.libvirt.vif [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T10:16:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1423011292',display_name='tempest-TestNetworkBasicOps-server-1423011292',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1423011292',id=11,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKs6r9vrmOLoMp7qP9DSziD19MyulJ4WxkGq32T5oMGA9YlFhrc8KR+CrRlK7gHjttZpWpF8q1BhU3cfWPT3YhBD4pYVF8/xqjrmceUzbapOQ0G+qVqOkZvdNryYHhvSMg==',key_name='tempest-TestNetworkBasicOps-1227421461',keypairs=<?>,launch_index=0,launched_at=2026-01-26T10:16:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-y2l003zy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T10:16:17Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=95d9d3cd-1887-4125-b0e7-2252b73dbe82,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 10:17:13 compute-0 nova_compute[254880]: 2026-01-26 10:17:13.026 254884 DEBUG nova.network.os_vif_util [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:17:13 compute-0 nova_compute[254880]: 2026-01-26 10:17:13.027 254884 DEBUG nova.network.os_vif_util [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:fb:ae,bridge_name='br-int',has_traffic_filtering=True,id=1cdd4fc2-81a5-488e-820c-586ca6c12d57,network=Network(9bff64e0-694f-4b2d-b4b5-5e3b1d94460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cdd4fc2-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:17:13 compute-0 nova_compute[254880]: 2026-01-26 10:17:13.027 254884 DEBUG os_vif [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:fb:ae,bridge_name='br-int',has_traffic_filtering=True,id=1cdd4fc2-81a5-488e-820c-586ca6c12d57,network=Network(9bff64e0-694f-4b2d-b4b5-5e3b1d94460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cdd4fc2-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 10:17:13 compute-0 nova_compute[254880]: 2026-01-26 10:17:13.029 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:13 compute-0 nova_compute[254880]: 2026-01-26 10:17:13.029 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1cdd4fc2-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:17:13 compute-0 nova_compute[254880]: 2026-01-26 10:17:13.031 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:13 compute-0 nova_compute[254880]: 2026-01-26 10:17:13.032 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:13 compute-0 nova_compute[254880]: 2026-01-26 10:17:13.034 254884 INFO os_vif [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:fb:ae,bridge_name='br-int',has_traffic_filtering=True,id=1cdd4fc2-81a5-488e-820c-586ca6c12d57,network=Network(9bff64e0-694f-4b2d-b4b5-5e3b1d94460e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cdd4fc2-81')
Jan 26 10:17:13 compute-0 podman[274662]: 2026-01-26 10:17:13.042054277 +0000 UTC m=+0.093975361 container cleanup ac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 10:17:13 compute-0 systemd[1]: libpod-conmon-ac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094.scope: Deactivated successfully.
Jan 26 10:17:13 compute-0 podman[274706]: 2026-01-26 10:17:13.111697163 +0000 UTC m=+0.044958003 container remove ac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:17:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:13.117 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[3d03fbcd-af31-410f-b423-77c54508b8d2]: (4, ('Mon Jan 26 10:17:12 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e (ac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094)\nac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094\nMon Jan 26 10:17:13 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e (ac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094)\nac229fe03db75ce90142ab238aae2b4737bff24eada84dac131209cd0a75a094\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:13.119 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[d92f1325-9c8a-4fbb-a24f-e67b291508b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:13.120 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bff64e0-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:17:13 compute-0 nova_compute[254880]: 2026-01-26 10:17:13.167 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:13 compute-0 kernel: tap9bff64e0-60: left promiscuous mode
Jan 26 10:17:13 compute-0 nova_compute[254880]: 2026-01-26 10:17:13.181 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:13.185 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[033e33cf-810a-49e6-8d10-1cb12ceb65d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:13.203 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[8452e4fe-79d3-4fa3-9fab-3e114b2a09b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:13.205 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[7009867a-3b89-49f4-b419-8c5614c4efe2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:13.222 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[15a89618-28d1-45b3-8e96-824b5a459893]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453020, 'reachable_time': 15311, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274724, 'error': None, 'target': 'ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d9bff64e0\x2d694f\x2d4b2d\x2db4b5\x2d5e3b1d94460e.mount: Deactivated successfully.
Jan 26 10:17:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:13.225 167020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bff64e0-694f-4b2d-b4b5-5e3b1d94460e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 10:17:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:13.225 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[578f3c3f-2e03-4e16-95af-6096abaeef28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:17:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:13.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:17:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1051: 353 pgs: 353 active+clean; 121 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 8.5 KiB/s wr, 29 op/s
Jan 26 10:17:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:17:14 compute-0 ceph-mon[74456]: pgmap v1051: 353 pgs: 353 active+clean; 121 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 8.5 KiB/s wr, 29 op/s
Jan 26 10:17:14 compute-0 nova_compute[254880]: 2026-01-26 10:17:14.650 254884 DEBUG nova.compute.manager [req-21fb715c-2cef-4c3a-9a9a-25c1e91ea8f8 req-c72897ef-c3de-4acc-8f9f-a5aeb50fd545 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-vif-unplugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:17:14 compute-0 nova_compute[254880]: 2026-01-26 10:17:14.650 254884 DEBUG oslo_concurrency.lockutils [req-21fb715c-2cef-4c3a-9a9a-25c1e91ea8f8 req-c72897ef-c3de-4acc-8f9f-a5aeb50fd545 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:14 compute-0 nova_compute[254880]: 2026-01-26 10:17:14.650 254884 DEBUG oslo_concurrency.lockutils [req-21fb715c-2cef-4c3a-9a9a-25c1e91ea8f8 req-c72897ef-c3de-4acc-8f9f-a5aeb50fd545 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:14 compute-0 nova_compute[254880]: 2026-01-26 10:17:14.650 254884 DEBUG oslo_concurrency.lockutils [req-21fb715c-2cef-4c3a-9a9a-25c1e91ea8f8 req-c72897ef-c3de-4acc-8f9f-a5aeb50fd545 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:14 compute-0 nova_compute[254880]: 2026-01-26 10:17:14.651 254884 DEBUG nova.compute.manager [req-21fb715c-2cef-4c3a-9a9a-25c1e91ea8f8 req-c72897ef-c3de-4acc-8f9f-a5aeb50fd545 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] No waiting events found dispatching network-vif-unplugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:17:14 compute-0 nova_compute[254880]: 2026-01-26 10:17:14.651 254884 DEBUG nova.compute.manager [req-21fb715c-2cef-4c3a-9a9a-25c1e91ea8f8 req-c72897ef-c3de-4acc-8f9f-a5aeb50fd545 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-vif-unplugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 10:17:14 compute-0 nova_compute[254880]: 2026-01-26 10:17:14.852 254884 DEBUG nova.network.neutron [req-ecfae113-4f30-4457-a1cf-881fa7bec97d req-2cf4add3-8a08-49cf-a212-90f0dd26f481 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updated VIF entry in instance network info cache for port 1cdd4fc2-81a5-488e-820c-586ca6c12d57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:17:14 compute-0 nova_compute[254880]: 2026-01-26 10:17:14.853 254884 DEBUG nova.network.neutron [req-ecfae113-4f30-4457-a1cf-881fa7bec97d req-2cf4add3-8a08-49cf-a212-90f0dd26f481 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updating instance_info_cache with network_info: [{"id": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "address": "fa:16:3e:c2:fb:ae", "network": {"id": "9bff64e0-694f-4b2d-b4b5-5e3b1d94460e", "bridge": "br-int", "label": "tempest-network-smoke--2141113135", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cdd4fc2-81", "ovs_interfaceid": "1cdd4fc2-81a5-488e-820c-586ca6c12d57", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:17:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:17:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:14.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:17:15 compute-0 nova_compute[254880]: 2026-01-26 10:17:15.068 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:15.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:15 compute-0 nova_compute[254880]: 2026-01-26 10:17:15.253 254884 DEBUG oslo_concurrency.lockutils [req-ecfae113-4f30-4457-a1cf-881fa7bec97d req-2cf4add3-8a08-49cf-a212-90f0dd26f481 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-95d9d3cd-1887-4125-b0e7-2252b73dbe82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:17:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1052: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 9.5 KiB/s wr, 48 op/s
Jan 26 10:17:16 compute-0 ceph-mon[74456]: pgmap v1052: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 9.5 KiB/s wr, 48 op/s
Jan 26 10:17:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:17:16] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Jan 26 10:17:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:17:16] "GET /metrics HTTP/1.1" 200 48549 "" "Prometheus/2.51.0"
Jan 26 10:17:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:16.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:17:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:17:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:17:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:17:17 compute-0 nova_compute[254880]: 2026-01-26 10:17:17.106 254884 DEBUG nova.compute.manager [req-84b8936c-9f55-4174-9ee2-00a2c3c8c07d req-ea5b01ab-b28f-4412-b7ec-5bdeb2ffc0ce b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:17:17 compute-0 nova_compute[254880]: 2026-01-26 10:17:17.106 254884 DEBUG oslo_concurrency.lockutils [req-84b8936c-9f55-4174-9ee2-00a2c3c8c07d req-ea5b01ab-b28f-4412-b7ec-5bdeb2ffc0ce b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:17 compute-0 nova_compute[254880]: 2026-01-26 10:17:17.107 254884 DEBUG oslo_concurrency.lockutils [req-84b8936c-9f55-4174-9ee2-00a2c3c8c07d req-ea5b01ab-b28f-4412-b7ec-5bdeb2ffc0ce b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:17 compute-0 nova_compute[254880]: 2026-01-26 10:17:17.107 254884 DEBUG oslo_concurrency.lockutils [req-84b8936c-9f55-4174-9ee2-00a2c3c8c07d req-ea5b01ab-b28f-4412-b7ec-5bdeb2ffc0ce b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:17 compute-0 nova_compute[254880]: 2026-01-26 10:17:17.107 254884 DEBUG nova.compute.manager [req-84b8936c-9f55-4174-9ee2-00a2c3c8c07d req-ea5b01ab-b28f-4412-b7ec-5bdeb2ffc0ce b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] No waiting events found dispatching network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:17:17 compute-0 nova_compute[254880]: 2026-01-26 10:17:17.107 254884 WARNING nova.compute.manager [req-84b8936c-9f55-4174-9ee2-00a2c3c8c07d req-ea5b01ab-b28f-4412-b7ec-5bdeb2ffc0ce b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received unexpected event network-vif-plugged-1cdd4fc2-81a5-488e-820c-586ca6c12d57 for instance with vm_state active and task_state deleting.
Jan 26 10:17:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:17.183Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:17:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:17.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:17 compute-0 nova_compute[254880]: 2026-01-26 10:17:17.352 254884 INFO nova.virt.libvirt.driver [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Deleting instance files /var/lib/nova/instances/95d9d3cd-1887-4125-b0e7-2252b73dbe82_del
Jan 26 10:17:17 compute-0 nova_compute[254880]: 2026-01-26 10:17:17.352 254884 INFO nova.virt.libvirt.driver [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Deletion of /var/lib/nova/instances/95d9d3cd-1887-4125-b0e7-2252b73dbe82_del complete
Jan 26 10:17:17 compute-0 nova_compute[254880]: 2026-01-26 10:17:17.494 254884 INFO nova.compute.manager [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Took 5.00 seconds to destroy the instance on the hypervisor.
Jan 26 10:17:17 compute-0 nova_compute[254880]: 2026-01-26 10:17:17.495 254884 DEBUG oslo.service.loopingcall [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 10:17:17 compute-0 nova_compute[254880]: 2026-01-26 10:17:17.495 254884 DEBUG nova.compute.manager [-] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 10:17:17 compute-0 nova_compute[254880]: 2026-01-26 10:17:17.495 254884 DEBUG nova.network.neutron [-] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 10:17:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1053: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.8 KiB/s wr, 46 op/s
Jan 26 10:17:18 compute-0 nova_compute[254880]: 2026-01-26 10:17:18.032 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:18 compute-0 ceph-mon[74456]: pgmap v1053: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.8 KiB/s wr, 46 op/s
Jan 26 10:17:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:17:18
Jan 26 10:17:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:17:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:17:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'volumes', 'vms', 'images', 'default.rgw.log']
Jan 26 10:17:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:17:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:17:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:17:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:17:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:17:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:17:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:17:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:17:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:17:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:18.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:17:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:18.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:17:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:19.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:17:19 compute-0 nova_compute[254880]: 2026-01-26 10:17:19.510 254884 DEBUG nova.network.neutron [-] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:17:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:17:19 compute-0 nova_compute[254880]: 2026-01-26 10:17:19.652 254884 DEBUG nova.compute.manager [req-2dadc977-8c7f-495e-820c-e7ac0353dc84 req-bac21bf5-5b4b-4850-9062-c90fdb5dda93 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Received event network-vif-deleted-1cdd4fc2-81a5-488e-820c-586ca6c12d57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:17:19 compute-0 nova_compute[254880]: 2026-01-26 10:17:19.653 254884 INFO nova.compute.manager [req-2dadc977-8c7f-495e-820c-e7ac0353dc84 req-bac21bf5-5b4b-4850-9062-c90fdb5dda93 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Neutron deleted interface 1cdd4fc2-81a5-488e-820c-586ca6c12d57; detaching it from the instance and deleting it from the info cache
Jan 26 10:17:19 compute-0 nova_compute[254880]: 2026-01-26 10:17:19.653 254884 DEBUG nova.network.neutron [req-2dadc977-8c7f-495e-820c-e7ac0353dc84 req-bac21bf5-5b4b-4850-9062-c90fdb5dda93 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:17:19 compute-0 nova_compute[254880]: 2026-01-26 10:17:19.655 254884 INFO nova.compute.manager [-] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Took 2.16 seconds to deallocate network for instance.
Jan 26 10:17:19 compute-0 nova_compute[254880]: 2026-01-26 10:17:19.747 254884 DEBUG nova.compute.manager [req-2dadc977-8c7f-495e-820c-e7ac0353dc84 req-bac21bf5-5b4b-4850-9062-c90fdb5dda93 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Detach interface failed, port_id=1cdd4fc2-81a5-488e-820c-586ca6c12d57, reason: Instance 95d9d3cd-1887-4125-b0e7-2252b73dbe82 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 10:17:19 compute-0 nova_compute[254880]: 2026-01-26 10:17:19.865 254884 DEBUG oslo_concurrency.lockutils [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:19 compute-0 nova_compute[254880]: 2026-01-26 10:17:19.865 254884 DEBUG oslo_concurrency.lockutils [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1054: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.4 KiB/s wr, 57 op/s
Jan 26 10:17:19 compute-0 nova_compute[254880]: 2026-01-26 10:17:19.925 254884 DEBUG oslo_concurrency.processutils [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:17:20 compute-0 nova_compute[254880]: 2026-01-26 10:17:20.070 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:17:20 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3993876434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:17:20 compute-0 nova_compute[254880]: 2026-01-26 10:17:20.390 254884 DEBUG oslo_concurrency.processutils [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:17:20 compute-0 nova_compute[254880]: 2026-01-26 10:17:20.396 254884 DEBUG nova.compute.provider_tree [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:17:20 compute-0 nova_compute[254880]: 2026-01-26 10:17:20.422 254884 DEBUG nova.scheduler.client.report [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:17:20 compute-0 nova_compute[254880]: 2026-01-26 10:17:20.451 254884 DEBUG oslo_concurrency.lockutils [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:20 compute-0 nova_compute[254880]: 2026-01-26 10:17:20.477 254884 INFO nova.scheduler.client.report [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Deleted allocations for instance 95d9d3cd-1887-4125-b0e7-2252b73dbe82
Jan 26 10:17:20 compute-0 nova_compute[254880]: 2026-01-26 10:17:20.535 254884 DEBUG oslo_concurrency.lockutils [None req-c248008c-6b5e-4f46-b0f8-4ee09cc97084 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "95d9d3cd-1887-4125-b0e7-2252b73dbe82" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:20 compute-0 ceph-mon[74456]: pgmap v1054: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.4 KiB/s wr, 57 op/s
Jan 26 10:17:20 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3993876434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:17:20 compute-0 sshd-session[274752]: Invalid user zabbix from 157.245.76.178 port 55390
Jan 26 10:17:20 compute-0 sshd-session[274752]: Connection closed by invalid user zabbix 157.245.76.178 port 55390 [preauth]
Jan 26 10:17:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:17:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:20.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:17:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:21.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1055: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Jan 26 10:17:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:17:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:17:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:17:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:17:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:22.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:22 compute-0 ceph-mon[74456]: pgmap v1055: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Jan 26 10:17:23 compute-0 nova_compute[254880]: 2026-01-26 10:17:23.036 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:23.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:23 compute-0 nova_compute[254880]: 2026-01-26 10:17:23.739 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:23 compute-0 nova_compute[254880]: 2026-01-26 10:17:23.835 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1056: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Jan 26 10:17:24 compute-0 podman[274761]: 2026-01-26 10:17:24.15705973 +0000 UTC m=+0.090923672 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 10:17:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:17:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:24.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:25 compute-0 nova_compute[254880]: 2026-01-26 10:17:25.072 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:25.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:25 compute-0 ceph-mon[74456]: pgmap v1056: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Jan 26 10:17:25 compute-0 sudo[274783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:17:25 compute-0 sudo[274783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:17:25 compute-0 sudo[274783]: pam_unix(sudo:session): session closed for user root
Jan 26 10:17:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1057: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Jan 26 10:17:26 compute-0 ceph-mon[74456]: pgmap v1057: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Jan 26 10:17:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:17:26] "GET /metrics HTTP/1.1" 200 48554 "" "Prometheus/2.51.0"
Jan 26 10:17:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:17:26] "GET /metrics HTTP/1.1" 200 48554 "" "Prometheus/2.51.0"
Jan 26 10:17:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:17:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:26.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:17:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:17:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:17:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:17:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:17:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:27.184Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:17:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:27.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:27 compute-0 nova_compute[254880]: 2026-01-26 10:17:27.730 254884 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769422632.728609, 95d9d3cd-1887-4125-b0e7-2252b73dbe82 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:17:27 compute-0 nova_compute[254880]: 2026-01-26 10:17:27.730 254884 INFO nova.compute.manager [-] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] VM Stopped (Lifecycle Event)
Jan 26 10:17:27 compute-0 nova_compute[254880]: 2026-01-26 10:17:27.758 254884 DEBUG nova.compute.manager [None req-9f776edf-4beb-41af-aa10-3b8e02805a75 - - - - - -] [instance: 95d9d3cd-1887-4125-b0e7-2252b73dbe82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:17:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1058: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 597 B/s wr, 11 op/s
Jan 26 10:17:28 compute-0 nova_compute[254880]: 2026-01-26 10:17:28.039 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:28.850Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:17:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:28.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:17:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:28.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:29 compute-0 ceph-mon[74456]: pgmap v1058: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 597 B/s wr, 11 op/s
Jan 26 10:17:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:17:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:29.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:17:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:17:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1059: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 597 B/s wr, 11 op/s
Jan 26 10:17:30 compute-0 nova_compute[254880]: 2026-01-26 10:17:30.074 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:30 compute-0 ceph-mon[74456]: pgmap v1059: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 597 B/s wr, 11 op/s
Jan 26 10:17:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:17:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:30.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:17:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:17:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:31.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:17:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1060: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:17:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:17:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:17:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:17:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:17:32 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:32.054 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:17:32 compute-0 nova_compute[254880]: 2026-01-26 10:17:32.055 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:32 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:32.055 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 10:17:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:32.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:32 compute-0 ceph-mon[74456]: pgmap v1060: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:17:33 compute-0 nova_compute[254880]: 2026-01-26 10:17:33.071 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:33.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:17:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:17:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1061: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:17:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:17:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:17:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:34.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:35 compute-0 ceph-mon[74456]: pgmap v1061: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:17:35 compute-0 nova_compute[254880]: 2026-01-26 10:17:35.076 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:35.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1062: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:17:35 compute-0 nova_compute[254880]: 2026-01-26 10:17:35.975 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:17:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:17:36] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:17:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:17:36] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:17:36 compute-0 nova_compute[254880]: 2026-01-26 10:17:36.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:17:36 compute-0 nova_compute[254880]: 2026-01-26 10:17:36.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:17:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:36.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:17:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:17:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:17:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:17:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:37.185Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:17:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:37.185Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:17:37 compute-0 podman[274821]: 2026-01-26 10:17:37.217819134 +0000 UTC m=+0.135803870 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:17:37 compute-0 ceph-mon[74456]: pgmap v1062: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:17:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:37.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1063: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.073 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:38 compute-0 ceph-mon[74456]: pgmap v1063: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.346 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.346 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.346 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.347 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.347 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.463 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.463 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.479 254884 DEBUG nova.compute.manager [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.547 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.548 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.554 254884 DEBUG nova.virt.hardware [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.555 254884 INFO nova.compute.claims [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Claim successful on node compute-0.ctlplane.example.com
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.640 254884 DEBUG nova.scheduler.client.report [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Refreshing inventories for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.666 254884 DEBUG nova.scheduler.client.report [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Updating ProviderTree inventory for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.667 254884 DEBUG nova.compute.provider_tree [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Updating inventory in ProviderTree for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.680 254884 DEBUG nova.scheduler.client.report [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Refreshing aggregate associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.717 254884 DEBUG nova.scheduler.client.report [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Refreshing trait associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.751 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:17:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:17:38 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/534014619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.795 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:17:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:38.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:17:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:38.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:17:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:38.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.978 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.979 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4543MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:17:38 compute-0 nova_compute[254880]: 2026-01-26 10:17:38.979 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:38.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:17:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2622083868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:17:39 compute-0 nova_compute[254880]: 2026-01-26 10:17:39.230 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:17:39 compute-0 nova_compute[254880]: 2026-01-26 10:17:39.234 254884 DEBUG nova.compute.provider_tree [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:17:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:39.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:17:39 compute-0 nova_compute[254880]: 2026-01-26 10:17:39.712 254884 DEBUG nova.scheduler.client.report [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:17:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/534014619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:17:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/249493324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:17:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2622083868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:17:39 compute-0 nova_compute[254880]: 2026-01-26 10:17:39.858 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:39 compute-0 nova_compute[254880]: 2026-01-26 10:17:39.859 254884 DEBUG nova.compute.manager [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 10:17:39 compute-0 nova_compute[254880]: 2026-01-26 10:17:39.861 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1064: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:17:39 compute-0 nova_compute[254880]: 2026-01-26 10:17:39.990 254884 DEBUG nova.compute.manager [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 10:17:39 compute-0 nova_compute[254880]: 2026-01-26 10:17:39.991 254884 DEBUG nova.network.neutron [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.010 254884 INFO nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.013 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Instance 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.014 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.014 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.034 254884 DEBUG nova.compute.manager [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.078 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.082 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.348 254884 DEBUG nova.policy [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c1208d3e25b940ea93fe76884c7a53db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.443 254884 DEBUG nova.compute.manager [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.445 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.445 254884 INFO nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Creating image(s)
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.480 254884 DEBUG nova.storage.rbd_utils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:17:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:17:40 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422539289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.510 254884 DEBUG nova.storage.rbd_utils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.537 254884 DEBUG nova.storage.rbd_utils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.542 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.564 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.570 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.606 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.607 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "d81880e926e175d0cc7241caa7cc18231a8a289c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.608 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "d81880e926e175d0cc7241caa7cc18231a8a289c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.608 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "d81880e926e175d0cc7241caa7cc18231a8a289c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.633 254884 DEBUG nova.storage.rbd_utils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.636 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.653 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.709 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:17:40 compute-0 nova_compute[254880]: 2026-01-26 10:17:40.709 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:40.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:41 compute-0 ceph-mon[74456]: pgmap v1064: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:17:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3657476507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:17:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3422539289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:17:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:41.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.491 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d81880e926e175d0cc7241caa7cc18231a8a289c 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.855s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.569 254884 DEBUG nova.storage.rbd_utils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] resizing rbd image 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.616 254884 DEBUG nova.network.neutron [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Successfully created port: 6691b1fe-ff5a-4e6c-88fa-00ca95260dec _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.718 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.719 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.719 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.720 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.725 254884 DEBUG nova.objects.instance [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'migration_context' on Instance uuid 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.740 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.740 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.741 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.741 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Ensure instance console log exists: /var/lib/nova/instances/66b4bcb5-3da1-4f3e-818d-9ff52a3e5049/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.742 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.742 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.742 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.743 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:17:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1065: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:17:41 compute-0 nova_compute[254880]: 2026-01-26 10:17:41.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:17:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:17:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:17:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:17:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:17:42 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:42.057 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:17:42 compute-0 nova_compute[254880]: 2026-01-26 10:17:42.450 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:17:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:42.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:43 compute-0 nova_compute[254880]: 2026-01-26 10:17:43.076 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:17:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:43.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:17:43 compute-0 ceph-mon[74456]: pgmap v1065: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:17:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1066: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:17:43 compute-0 nova_compute[254880]: 2026-01-26 10:17:43.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:17:44 compute-0 nova_compute[254880]: 2026-01-26 10:17:44.339 254884 DEBUG nova.network.neutron [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Successfully updated port: 6691b1fe-ff5a-4e6c-88fa-00ca95260dec _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 10:17:44 compute-0 nova_compute[254880]: 2026-01-26 10:17:44.361 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "refresh_cache-66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:17:44 compute-0 nova_compute[254880]: 2026-01-26 10:17:44.361 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquired lock "refresh_cache-66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:17:44 compute-0 nova_compute[254880]: 2026-01-26 10:17:44.362 254884 DEBUG nova.network.neutron [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 10:17:44 compute-0 nova_compute[254880]: 2026-01-26 10:17:44.460 254884 DEBUG nova.compute.manager [req-4549ccd3-d543-4d07-8ac1-d8b579576e86 req-3737a177-ba58-4e28-b041-d3f8a12e5011 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Received event network-changed-6691b1fe-ff5a-4e6c-88fa-00ca95260dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:17:44 compute-0 nova_compute[254880]: 2026-01-26 10:17:44.460 254884 DEBUG nova.compute.manager [req-4549ccd3-d543-4d07-8ac1-d8b579576e86 req-3737a177-ba58-4e28-b041-d3f8a12e5011 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Refreshing instance network info cache due to event network-changed-6691b1fe-ff5a-4e6c-88fa-00ca95260dec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:17:44 compute-0 nova_compute[254880]: 2026-01-26 10:17:44.460 254884 DEBUG oslo_concurrency.lockutils [req-4549ccd3-d543-4d07-8ac1-d8b579576e86 req-3737a177-ba58-4e28-b041-d3f8a12e5011 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:17:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:17:44 compute-0 nova_compute[254880]: 2026-01-26 10:17:44.520 254884 DEBUG nova.network.neutron [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:44.918359) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422664918416, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2148, "num_deletes": 251, "total_data_size": 4348291, "memory_usage": 4431136, "flush_reason": "Manual Compaction"}
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 26 10:17:44 compute-0 ceph-mon[74456]: pgmap v1066: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422664937926, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4191624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29513, "largest_seqno": 31660, "table_properties": {"data_size": 4181901, "index_size": 6153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20235, "raw_average_key_size": 20, "raw_value_size": 4162471, "raw_average_value_size": 4221, "num_data_blocks": 264, "num_entries": 986, "num_filter_entries": 986, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769422455, "oldest_key_time": 1769422455, "file_creation_time": 1769422664, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 19604 microseconds, and 8443 cpu microseconds.
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:44.937971) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4191624 bytes OK
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:44.937991) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:44.940692) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:44.940705) EVENT_LOG_v1 {"time_micros": 1769422664940701, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:44.940720) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4339532, prev total WAL file size 4339532, number of live WAL files 2.
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:44.941660) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(4093KB)], [65(11MB)]
Jan 26 10:17:44 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422664941705, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16528883, "oldest_snapshot_seqno": -1}
Jan 26 10:17:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:44.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6224 keys, 14391601 bytes, temperature: kUnknown
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422665005717, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 14391601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14350283, "index_size": 24632, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 159366, "raw_average_key_size": 25, "raw_value_size": 14238500, "raw_average_value_size": 2287, "num_data_blocks": 990, "num_entries": 6224, "num_filter_entries": 6224, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769422664, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:45.006060) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14391601 bytes
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:45.007349) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 257.7 rd, 224.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 11.8 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 6745, records dropped: 521 output_compression: NoCompression
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:45.007371) EVENT_LOG_v1 {"time_micros": 1769422665007361, "job": 36, "event": "compaction_finished", "compaction_time_micros": 64130, "compaction_time_cpu_micros": 27328, "output_level": 6, "num_output_files": 1, "total_output_size": 14391601, "num_input_records": 6745, "num_output_records": 6224, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422665008289, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422665011066, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:44.941585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:45.011166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:45.011172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:45.011174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:45.011175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:17:45 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:17:45.011177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.079 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:45.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:45 compute-0 sudo[275089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:17:45 compute-0 sudo[275089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:17:45 compute-0 sudo[275089]: pam_unix(sudo:session): session closed for user root
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.626 254884 DEBUG nova.network.neutron [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Updating instance_info_cache with network_info: [{"id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "address": "fa:16:3e:31:70:dc", "network": {"id": "d87aa6fc-537c-4182-8fe6-de299c89bce4", "bridge": "br-int", "label": "tempest-network-smoke--1656254383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6691b1fe-ff", "ovs_interfaceid": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.644 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Releasing lock "refresh_cache-66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.644 254884 DEBUG nova.compute.manager [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Instance network_info: |[{"id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "address": "fa:16:3e:31:70:dc", "network": {"id": "d87aa6fc-537c-4182-8fe6-de299c89bce4", "bridge": "br-int", "label": "tempest-network-smoke--1656254383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6691b1fe-ff", "ovs_interfaceid": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.644 254884 DEBUG oslo_concurrency.lockutils [req-4549ccd3-d543-4d07-8ac1-d8b579576e86 req-3737a177-ba58-4e28-b041-d3f8a12e5011 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.645 254884 DEBUG nova.network.neutron [req-4549ccd3-d543-4d07-8ac1-d8b579576e86 req-3737a177-ba58-4e28-b041-d3f8a12e5011 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Refreshing network info cache for port 6691b1fe-ff5a-4e6c-88fa-00ca95260dec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.647 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Start _get_guest_xml network_info=[{"id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "address": "fa:16:3e:31:70:dc", "network": {"id": "d87aa6fc-537c-4182-8fe6-de299c89bce4", "bridge": "br-int", "label": "tempest-network-smoke--1656254383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6691b1fe-ff", "ovs_interfaceid": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T10:05:39Z,direct_url=<?>,disk_format='qcow2',id=6789692f-fc1f-4efa-ae75-dcc13be695ef,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3ff3fa2a5531460b993c609589aa545d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T10:05:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'device_type': 'disk', 'size': 0, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_options': None, 'image_id': '6789692f-fc1f-4efa-ae75-dcc13be695ef'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.651 254884 WARNING nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.654 254884 DEBUG nova.virt.libvirt.host [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.654 254884 DEBUG nova.virt.libvirt.host [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.661 254884 DEBUG nova.virt.libvirt.host [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.662 254884 DEBUG nova.virt.libvirt.host [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.662 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.662 254884 DEBUG nova.virt.hardware [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T10:05:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='57e1601b-dbfa-4d3b-8b96-27302e4a7a06',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T10:05:39Z,direct_url=<?>,disk_format='qcow2',id=6789692f-fc1f-4efa-ae75-dcc13be695ef,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3ff3fa2a5531460b993c609589aa545d',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T10:05:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.663 254884 DEBUG nova.virt.hardware [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.663 254884 DEBUG nova.virt.hardware [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.663 254884 DEBUG nova.virt.hardware [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.663 254884 DEBUG nova.virt.hardware [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.664 254884 DEBUG nova.virt.hardware [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.664 254884 DEBUG nova.virt.hardware [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.664 254884 DEBUG nova.virt.hardware [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.664 254884 DEBUG nova.virt.hardware [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.664 254884 DEBUG nova.virt.hardware [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.664 254884 DEBUG nova.virt.hardware [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
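The eleven debug lines above show nova.virt.hardware reducing the m1.nano request (1 vCPU, no flavor or image topology hints, limits of 65536 sockets/cores/threads) to the single candidate 1:1:1. A minimal sketch of that enumeration, assuming only that candidates are the factorizations of the vCPU count that fit inside the limits; the VirtCPUTopology here is a plain namedtuple stand-in, not nova's versioned object:

    from collections import namedtuple

    # Hypothetical stand-in for nova's VirtCPUTopology object.
    VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield every sockets*cores*threads factorization of vcpus
        that respects the limits."""
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    yield VirtCPUTopology(sockets, cores, threads)

    # m1.nano has a single vCPU, so this prints [VirtCPUTopology(1, 1, 1)],
    # matching "Got 1 possible topologies" in the trace above.
    print(list(possible_topologies(1)))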
Jan 26 10:17:45 compute-0 nova_compute[254880]: 2026-01-26 10:17:45.667 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:17:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1067: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:17:45 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1005788338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:17:45 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/338352986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:17:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 10:17:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3031763687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.178 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
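The ceph mon dump call above is issued through oslo.concurrency's processutils wrapper. A short sketch of the same lookup, assuming a host where the /etc/ceph/ceph.conf and client.openstack keyring from the log are in place; the monitor address fields are read defensively because their exact shape varies between Ceph releases:

    import json

    from oslo_concurrency import processutils

    # Same monitor lookup the trace shows nova performing before it can
    # attach an RBD-backed disk.
    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out).get('mons', [])
    # Each entry carries the address that later lands in the guest XML as a
    # <host name="..." port="6789"/> element.
    for mon in mons:
        print(mon.get('name'), mon.get('public_addr') or mon.get('addr'))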
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.207 254884 DEBUG nova.storage.rbd_utils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.212 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.602 254884 DEBUG nova.network.neutron [req-4549ccd3-d543-4d07-8ac1-d8b579576e86 req-3737a177-ba58-4e28-b041-d3f8a12e5011 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Updated VIF entry in instance network info cache for port 6691b1fe-ff5a-4e6c-88fa-00ca95260dec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.602 254884 DEBUG nova.network.neutron [req-4549ccd3-d543-4d07-8ac1-d8b579576e86 req-3737a177-ba58-4e28-b041-d3f8a12e5011 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Updating instance_info_cache with network_info: [{"id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "address": "fa:16:3e:31:70:dc", "network": {"id": "d87aa6fc-537c-4182-8fe6-de299c89bce4", "bridge": "br-int", "label": "tempest-network-smoke--1656254383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6691b1fe-ff", "ovs_interfaceid": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.619 254884 DEBUG oslo_concurrency.lockutils [req-4549ccd3-d543-4d07-8ac1-d8b579576e86 req-3737a177-ba58-4e28-b041-d3f8a12e5011 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
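The cache payload above is plain JSON, so extracting the pieces nova cares about (fixed IPs, MAC, MTU) is straightforward. A minimal reader over an abbreviated copy of the logged entry for port 6691b1fe-ff5a-4e6c-88fa-00ca95260dec:

    # Abbreviated from the instance_info_cache update in the trace above.
    network_info = [{
        "id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec",
        "address": "fa:16:3e:31:70:dc",
        "network": {
            "meta": {"mtu": 1442},
            "subnets": [{
                "cidr": "10.100.0.0/28",
                "ips": [{"address": "10.100.0.4", "type": "fixed"}],
            }],
        },
    }]

    for vif in network_info:
        mtu = vif["network"]["meta"]["mtu"]
        fixed = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["id"], vif["address"], mtu, fixed)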
Jan 26 10:17:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:17:46] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:17:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 10:17:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3541673696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:17:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:17:46] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.659 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.660 254884 DEBUG nova.virt.libvirt.vif [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T10:17:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2093418869',display_name='tempest-TestNetworkBasicOps-server-2093418869',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2093418869',id=13,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr+w06vwdCkNN484Qwtgmpdo7M4YjCuObyKgkng1fQAWyr7p8R/5bYL0ujc7Bi2+Kkxy4U8CSzkndngkshmYGUSDooRUWI9TIUGG687sqjKLkkjY6hdtQLZjfvxs498lw==',key_name='tempest-TestNetworkBasicOps-55488525',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-bd1l0bad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T10:17:40Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=66b4bcb5-3da1-4f3e-818d-9ff52a3e5049,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "address": "fa:16:3e:31:70:dc", "network": {"id": "d87aa6fc-537c-4182-8fe6-de299c89bce4", "bridge": "br-int", "label": "tempest-network-smoke--1656254383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6691b1fe-ff", "ovs_interfaceid": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.661 254884 DEBUG nova.network.os_vif_util [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "address": "fa:16:3e:31:70:dc", "network": {"id": "d87aa6fc-537c-4182-8fe6-de299c89bce4", "bridge": "br-int", "label": "tempest-network-smoke--1656254383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6691b1fe-ff", "ovs_interfaceid": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.661 254884 DEBUG nova.network.os_vif_util [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:70:dc,bridge_name='br-int',has_traffic_filtering=True,id=6691b1fe-ff5a-4e6c-88fa-00ca95260dec,network=Network(d87aa6fc-537c-4182-8fe6-de299c89bce4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6691b1fe-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
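The Converting/Converted pair above maps the nova-side VIF dict onto an os-vif VIFOpenVSwitch object. A rough sketch of which fields survive the mapping, using a plain dataclass as a stand-in for the real os_vif.objects.vif.VIFOpenVSwitch; the field list is inferred from the log line, not from the library's full schema:

    from dataclasses import dataclass

    # Hypothetical stand-in mirroring the fields printed in the
    # "Converted object" record above.
    @dataclass
    class VIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        active: bool

    def nova_to_osvif(vif):
        details = vif["details"]
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details["bridge_name"],
            vif_name=vif["devname"],
            has_traffic_filtering=details["port_filter"],
            active=vif["active"],
        )

    # Values abbreviated from the "Converting VIF" record above.
    vif = {
        "id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec",
        "address": "fa:16:3e:31:70:dc",
        "devname": "tap6691b1fe-ff",
        "active": False,
        "details": {"bridge_name": "br-int", "port_filter": True},
    }
    print(nova_to_osvif(vif))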
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.662 254884 DEBUG nova.objects.instance [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.678 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] End _get_guest_xml xml=<domain type="kvm">
Jan 26 10:17:46 compute-0 nova_compute[254880]:   <uuid>66b4bcb5-3da1-4f3e-818d-9ff52a3e5049</uuid>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   <name>instance-0000000d</name>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   <memory>131072</memory>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   <vcpu>1</vcpu>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   <metadata>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <nova:name>tempest-TestNetworkBasicOps-server-2093418869</nova:name>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <nova:creationTime>2026-01-26 10:17:45</nova:creationTime>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <nova:flavor name="m1.nano">
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <nova:memory>128</nova:memory>
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <nova:disk>1</nova:disk>
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <nova:swap>0</nova:swap>
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <nova:vcpus>1</nova:vcpus>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       </nova:flavor>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <nova:owner>
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <nova:user uuid="c1208d3e25b940ea93fe76884c7a53db">tempest-TestNetworkBasicOps-966559857-project-member</nova:user>
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <nova:project uuid="6ed221b375a44fc2bb2a8f232c5446e7">tempest-TestNetworkBasicOps-966559857</nova:project>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       </nova:owner>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <nova:root type="image" uuid="6789692f-fc1f-4efa-ae75-dcc13be695ef"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <nova:ports>
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <nova:port uuid="6691b1fe-ff5a-4e6c-88fa-00ca95260dec">
Jan 26 10:17:46 compute-0 nova_compute[254880]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:         </nova:port>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       </nova:ports>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     </nova:instance>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   </metadata>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   <sysinfo type="smbios">
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <system>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <entry name="manufacturer">RDO</entry>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <entry name="product">OpenStack Compute</entry>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <entry name="serial">66b4bcb5-3da1-4f3e-818d-9ff52a3e5049</entry>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <entry name="uuid">66b4bcb5-3da1-4f3e-818d-9ff52a3e5049</entry>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <entry name="family">Virtual Machine</entry>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     </system>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   </sysinfo>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   <os>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <boot dev="hd"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <smbios mode="sysinfo"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   </os>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   <features>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <acpi/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <apic/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <vmcoreinfo/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   </features>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   <clock offset="utc">
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <timer name="hpet" present="no"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   </clock>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   <cpu mode="host-model" match="exact">
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   </cpu>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   <devices>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <disk type="network" device="disk">
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <driver type="raw" cache="none"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <source protocol="rbd" name="vms/66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk">
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <host name="192.168.122.100" port="6789"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <host name="192.168.122.102" port="6789"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <host name="192.168.122.101" port="6789"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       </source>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <auth username="openstack">
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <secret type="ceph" uuid="1a70b85d-e3fd-5814-8a6a-37ea00fcae30"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <target dev="vda" bus="virtio"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <disk type="network" device="cdrom">
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <driver type="raw" cache="none"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <source protocol="rbd" name="vms/66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk.config">
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <host name="192.168.122.100" port="6789"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <host name="192.168.122.102" port="6789"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <host name="192.168.122.101" port="6789"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       </source>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <auth username="openstack">
Jan 26 10:17:46 compute-0 nova_compute[254880]:         <secret type="ceph" uuid="1a70b85d-e3fd-5814-8a6a-37ea00fcae30"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       </auth>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <target dev="sda" bus="sata"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     </disk>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <interface type="ethernet">
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <mac address="fa:16:3e:31:70:dc"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <model type="virtio"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <mtu size="1442"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <target dev="tap6691b1fe-ff"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     </interface>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <serial type="pty">
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <log file="/var/lib/nova/instances/66b4bcb5-3da1-4f3e-818d-9ff52a3e5049/console.log" append="off"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     </serial>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <video>
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <model type="virtio"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     </video>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <input type="tablet" bus="usb"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <rng model="virtio">
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <backend model="random">/dev/urandom</backend>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     </rng>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <controller type="usb" index="0"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     <memballoon model="virtio">
Jan 26 10:17:46 compute-0 nova_compute[254880]:       <stats period="10"/>
Jan 26 10:17:46 compute-0 nova_compute[254880]:     </memballoon>
Jan 26 10:17:46 compute-0 nova_compute[254880]:   </devices>
Jan 26 10:17:46 compute-0 nova_compute[254880]: </domain>
Jan 26 10:17:46 compute-0 nova_compute[254880]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
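Everything libvirt needs (RBD monitor addresses, the Cinder/Ceph auth secret, device targets) is now in the guest XML above. A small standard-library sketch of reading the network disks back out of such a document; xml_text is an abbreviated copy of the domain definition just logged:

    import xml.etree.ElementTree as ET

    xml_text = """<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk">
            <host name="192.168.122.100" port="6789"/>
            <host name="192.168.122.102" port="6789"/>
            <host name="192.168.122.101" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>"""

    root = ET.fromstring(xml_text)
    for disk in root.findall("./devices/disk[@type='network']"):
        source = disk.find("source")
        target = disk.find("target")
        mons = ["%s:%s" % (h.get("name"), h.get("port"))
                for h in source.findall("host")]
        print(target.get("dev"), source.get("name"), mons)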
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.679 254884 DEBUG nova.compute.manager [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Preparing to wait for external event network-vif-plugged-6691b1fe-ff5a-4e6c-88fa-00ca95260dec prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.680 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.680 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.680 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
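The prepare_for_instance_event step above registers a waiter under the per-instance "-events" lock so that the spawn can later block until neutron delivers network-vif-plugged. A simplified model of that handshake using threading primitives (nova itself runs on eventlet, but the register/notify/wait shape is the same):

    import threading

    _events = {}
    _events_lock = threading.Lock()

    def prepare_for_instance_event(instance_uuid, event_name):
        # Mirrors _create_or_get_event: register under the lock, return the
        # object the caller will later wait on.
        with _events_lock:
            return _events.setdefault((instance_uuid, event_name),
                                      threading.Event())

    def process_external_event(instance_uuid, event_name):
        # What the external network-vif-plugged notification would trigger.
        with _events_lock:
            ev = _events.get((instance_uuid, event_name))
        if ev:
            ev.set()

    uuid = '66b4bcb5-3da1-4f3e-818d-9ff52a3e5049'
    name = 'network-vif-plugged-6691b1fe-ff5a-4e6c-88fa-00ca95260dec'
    ev = prepare_for_instance_event(uuid, name)
    process_external_event(uuid, name)
    assert ev.wait(timeout=1)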
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.681 254884 DEBUG nova.virt.libvirt.vif [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T10:17:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2093418869',display_name='tempest-TestNetworkBasicOps-server-2093418869',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2093418869',id=13,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr+w06vwdCkNN484Qwtgmpdo7M4YjCuObyKgkng1fQAWyr7p8R/5bYL0ujc7Bi2+Kkxy4U8CSzkndngkshmYGUSDooRUWI9TIUGG687sqjKLkkjY6hdtQLZjfvxs498lw==',key_name='tempest-TestNetworkBasicOps-55488525',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-bd1l0bad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T10:17:40Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=66b4bcb5-3da1-4f3e-818d-9ff52a3e5049,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "address": "fa:16:3e:31:70:dc", "network": {"id": "d87aa6fc-537c-4182-8fe6-de299c89bce4", "bridge": "br-int", "label": "tempest-network-smoke--1656254383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6691b1fe-ff", "ovs_interfaceid": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.681 254884 DEBUG nova.network.os_vif_util [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "address": "fa:16:3e:31:70:dc", "network": {"id": "d87aa6fc-537c-4182-8fe6-de299c89bce4", "bridge": "br-int", "label": "tempest-network-smoke--1656254383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6691b1fe-ff", "ovs_interfaceid": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.682 254884 DEBUG nova.network.os_vif_util [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:70:dc,bridge_name='br-int',has_traffic_filtering=True,id=6691b1fe-ff5a-4e6c-88fa-00ca95260dec,network=Network(d87aa6fc-537c-4182-8fe6-de299c89bce4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6691b1fe-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.682 254884 DEBUG os_vif [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:70:dc,bridge_name='br-int',has_traffic_filtering=True,id=6691b1fe-ff5a-4e6c-88fa-00ca95260dec,network=Network(d87aa6fc-537c-4182-8fe6-de299c89bce4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6691b1fe-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.683 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.683 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.684 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.687 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.687 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6691b1fe-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.688 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6691b1fe-ff, col_values=(('external_ids', {'iface-id': '6691b1fe-ff5a-4e6c-88fa-00ca95260dec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:31:70:dc', 'vm-uuid': '66b4bcb5-3da1-4f3e-818d-9ff52a3e5049'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.689 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:46 compute-0 NetworkManager[48970]: <info>  [1769422666.6901] manager: (tap6691b1fe-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.691 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.697 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.697 254884 INFO os_vif [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:70:dc,bridge_name='br-int',has_traffic_filtering=True,id=6691b1fe-ff5a-4e6c-88fa-00ca95260dec,network=Network(d87aa6fc-537c-4182-8fe6-de299c89bce4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6691b1fe-ff')
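The three ovsdbapp commands above (AddBridgeCommand with may_exist, AddPortCommand, and DbSetCommand writing the external_ids) have a well-known ovs-vsctl equivalent. A sketch of the same transaction from the CLI, with the port name, MAC, and UUIDs copied from the log; this assumes root and an installed openvswitch, and it is not how os-vif itself talks to OVSDB:

    import subprocess

    port = 'tap6691b1fe-ff'
    external_ids = {
        'iface-id': '6691b1fe-ff5a-4e6c-88fa-00ca95260dec',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:31:70:dc',
        'vm-uuid': '66b4bcb5-3da1-4f3e-818d-9ff52a3e5049',
    }
    cmd = ['ovs-vsctl',
           '--', '--may-exist', 'add-br', 'br-int',
           '--', 'set', 'Bridge', 'br-int', 'datapath_type=system',
           '--', '--may-exist', 'add-port', 'br-int', port,
           '--', 'set', 'Interface', port]
    cmd += ['external_ids:%s=%s' % (k, v) for k, v in external_ids.items()]
    subprocess.run(cmd, check=True)

The iface-id written here is what ovn-controller matches against Port_Binding.logical_port a moment later when it claims the lport for this chassis.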
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.858 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.859 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.859 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] No VIF found with MAC fa:16:3e:31:70:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.860 254884 INFO nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Using config drive
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.887 254884 DEBUG nova.storage.rbd_utils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:17:46 compute-0 ceph-mon[74456]: pgmap v1067: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:17:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3031763687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:17:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3541673696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:17:46 compute-0 nova_compute[254880]: 2026-01-26 10:17:46.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:17:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:46.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:17:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:17:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:17:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:17:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:47.185Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:17:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:47.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:47 compute-0 nova_compute[254880]: 2026-01-26 10:17:47.357 254884 INFO nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Creating config drive at /var/lib/nova/instances/66b4bcb5-3da1-4f3e-818d-9ff52a3e5049/disk.config
Jan 26 10:17:47 compute-0 nova_compute[254880]: 2026-01-26 10:17:47.362 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66b4bcb5-3da1-4f3e-818d-9ff52a3e5049/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphk2krvrl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:17:47 compute-0 nova_compute[254880]: 2026-01-26 10:17:47.487 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66b4bcb5-3da1-4f3e-818d-9ff52a3e5049/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphk2krvrl" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:17:47 compute-0 nova_compute[254880]: 2026-01-26 10:17:47.513 254884 DEBUG nova.storage.rbd_utils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] rbd image 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 10:17:47 compute-0 nova_compute[254880]: 2026-01-26 10:17:47.516 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66b4bcb5-3da1-4f3e-818d-9ff52a3e5049/disk.config 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:17:47 compute-0 nova_compute[254880]: 2026-01-26 10:17:47.765 254884 DEBUG oslo_concurrency.processutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66b4bcb5-3da1-4f3e-818d-9ff52a3e5049/disk.config 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:17:47 compute-0 nova_compute[254880]: 2026-01-26 10:17:47.766 254884 INFO nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Deleting local config drive /var/lib/nova/instances/66b4bcb5-3da1-4f3e-818d-9ff52a3e5049/disk.config because it was imported into RBD.
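The config-drive sequence above is: stage the metadata in a temp directory, build an ISO9660 image with mkisofs, import it into the vms pool as <uuid>_disk.config, then delete the local file. A condensed replay of those commands as the log shows them; /tmp/tmphk2krvrl is the staging directory from the trace, which in practice would be a fresh tempfile.mkdtemp() populated with the metadata files:

    import os
    import subprocess

    inst = '66b4bcb5-3da1-4f3e-818d-9ff52a3e5049'
    iso = '/var/lib/nova/instances/%s/disk.config' % inst

    # Build the config-2 ISO from the staged metadata directory.
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-publisher',
         'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmphk2krvrl'],
        check=True)

    # Import it into RBD so the guest can attach it as the sata cdrom
    # defined in the domain XML.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso, '%s_disk.config' % inst,
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)

    os.unlink(iso)  # the local copy is redundant once it lives in RBD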
Jan 26 10:17:47 compute-0 kernel: tap6691b1fe-ff: entered promiscuous mode
Jan 26 10:17:47 compute-0 NetworkManager[48970]: <info>  [1769422667.8184] manager: (tap6691b1fe-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Jan 26 10:17:47 compute-0 ovn_controller[155832]: 2026-01-26T10:17:47Z|00077|binding|INFO|Claiming lport 6691b1fe-ff5a-4e6c-88fa-00ca95260dec for this chassis.
Jan 26 10:17:47 compute-0 nova_compute[254880]: 2026-01-26 10:17:47.818 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:47 compute-0 ovn_controller[155832]: 2026-01-26T10:17:47Z|00078|binding|INFO|6691b1fe-ff5a-4e6c-88fa-00ca95260dec: Claiming fa:16:3e:31:70:dc 10.100.0.4
Jan 26 10:17:47 compute-0 nova_compute[254880]: 2026-01-26 10:17:47.826 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.838 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:70:dc 10.100.0.4'], port_security=['fa:16:3e:31:70:dc 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '66b4bcb5-3da1-4f3e-818d-9ff52a3e5049', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d87aa6fc-537c-4182-8fe6-de299c89bce4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd93909c8-58d3-4249-87b1-29f4ada025eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cad66ad9-00da-4b9b-8ce3-9ca7cd41f24e, chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], logical_port=6691b1fe-ff5a-4e6c-88fa-00ca95260dec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.840 166625 INFO neutron.agent.ovn.metadata.agent [-] Port 6691b1fe-ff5a-4e6c-88fa-00ca95260dec in datapath d87aa6fc-537c-4182-8fe6-de299c89bce4 bound to our chassis
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.842 166625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d87aa6fc-537c-4182-8fe6-de299c89bce4
Jan 26 10:17:47 compute-0 systemd-udevd[275251]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 10:17:47 compute-0 systemd-machined[221254]: New machine qemu-5-instance-0000000d.
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.852 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7342a8-c7ee-4a53-8087-300ae2161453]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.853 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd87aa6fc-51 in ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
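The VETH provisioning above pairs tapd87aa6fc-50 (kept in the root namespace so it can be plugged into OVS) with tapd87aa6fc-51 inside the ovnmeta- namespace that serves metadata for this datapath. The agent does this through pyroute2 under privsep; an equivalent sketch with plain iproute2 commands, names copied from the log:

    import subprocess

    ns = 'ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4'
    outer, inner = 'tapd87aa6fc-50', 'tapd87aa6fc-51'

    for cmd in (
        ['ip', 'netns', 'add', ns],
        ['ip', 'link', 'add', outer, 'type', 'veth', 'peer', 'name', inner],
        ['ip', 'link', 'set', inner, 'netns', ns],
        ['ip', 'netns', 'exec', ns, 'ip', 'link', 'set', inner, 'up'],
        ['ip', 'link', 'set', outer, 'up'],
    ):
        subprocess.run(cmd, check=True)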
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.854 261020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd87aa6fc-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.855 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[61e698e1-d676-47ac-9166-ad8fd717e276]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.855 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e59039-f33f-46a4-a166-6314f734817f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:47 compute-0 NetworkManager[48970]: <info>  [1769422667.8622] device (tap6691b1fe-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 10:17:47 compute-0 NetworkManager[48970]: <info>  [1769422667.8630] device (tap6691b1fe-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.867 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[5847bd5e-7769-4990-8454-e98c81a4fbf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:47 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-0000000d.
Jan 26 10:17:47 compute-0 nova_compute[254880]: 2026-01-26 10:17:47.890 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.892 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[280274a7-bf56-4335-b349-77b26b2ad38a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:47 compute-0 ovn_controller[155832]: 2026-01-26T10:17:47Z|00079|binding|INFO|Setting lport 6691b1fe-ff5a-4e6c-88fa-00ca95260dec ovn-installed in OVS
Jan 26 10:17:47 compute-0 ovn_controller[155832]: 2026-01-26T10:17:47Z|00080|binding|INFO|Setting lport 6691b1fe-ff5a-4e6c-88fa-00ca95260dec up in Southbound
Jan 26 10:17:47 compute-0 nova_compute[254880]: 2026-01-26 10:17:47.897 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.919 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[890e4a3d-a90b-4c29-83f2-b2803e3368da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:47 compute-0 systemd-udevd[275255]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 10:17:47 compute-0 NetworkManager[48970]: <info>  [1769422667.9258] manager: (tapd87aa6fc-50): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.925 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[743b822c-ea3e-4239-be9a-b43f32f06f4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1068: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.959 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[1259e68d-49c8-447b-a6e6-7af626349ce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.962 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[996832a5-764e-46db-a2c5-61d1e77d1bd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:47 compute-0 NetworkManager[48970]: <info>  [1769422667.9799] device (tapd87aa6fc-50): carrier: link connected
Jan 26 10:17:47 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:47.984 261249 DEBUG oslo.privsep.daemon [-] privsep: reply[77c9a96e-3cee-45e0-a8b5-4466ca2c6f89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:48.001 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[cba7b68b-9e99-423f-82ef-8eb053961bbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd87aa6fc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462163, 'reachable_time': 38758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275284, 'error': None, 'target': 'ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:48.013 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e8f9fb-4516-46ca-b5e5-bb5ca0670640]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:c8af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 462163, 'tstamp': 462163}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275285, 'error': None, 'target': 'ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:48.026 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[a177e3d9-8a6f-4f63-9cec-638ac83c913c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd87aa6fc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462163, 'reachable_time': 38758, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275286, 'error': None, 'target': 'ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:48.057 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[7e475b67-ee09-49d6-8911-26dcf379bfe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:48.107 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[41e87b6b-df7e-4493-acf5-628ba81d8496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:48.108 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd87aa6fc-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:48.108 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:48.108 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd87aa6fc-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.110 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:48 compute-0 kernel: tapd87aa6fc-50: entered promiscuous mode
Jan 26 10:17:48 compute-0 NetworkManager[48970]: <info>  [1769422668.1114] manager: (tapd87aa6fc-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.113 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:48.113 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd87aa6fc-50, col_values=(('external_ids', {'iface-id': '2ca85ea2-a6ad-497f-b6d7-060239af18dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.114 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:48 compute-0 ovn_controller[155832]: 2026-01-26T10:17:48Z|00081|binding|INFO|Releasing lport 2ca85ea2-a6ad-497f-b6d7-060239af18dc from this chassis (sb_readonly=0)
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.127 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.128 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:48.128 166625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d87aa6fc-537c-4182-8fe6-de299c89bce4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d87aa6fc-537c-4182-8fe6-de299c89bce4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:48.129 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[db775b54-4377-43ab-83cc-8871c101548e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:48.130 166625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: global
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     log         /dev/log local0 debug
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     log-tag     haproxy-metadata-proxy-d87aa6fc-537c-4182-8fe6-de299c89bce4
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     user        root
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     group       root
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     maxconn     1024
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     pidfile     /var/lib/neutron/external/pids/d87aa6fc-537c-4182-8fe6-de299c89bce4.pid.haproxy
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     daemon
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: defaults
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     log global
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     mode http
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     option httplog
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     option dontlognull
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     option http-server-close
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     option forwardfor
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     retries                 3
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     timeout http-request    30s
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     timeout connect         30s
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     timeout client          32s
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     timeout server          32s
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     timeout http-keep-alive 30s
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: listen listener
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     bind 169.254.169.254:80
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:     http-request add-header X-OVN-Network-ID d87aa6fc-537c-4182-8fe6-de299c89bce4
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 10:17:48 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:48.130 166625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4', 'env', 'PROCESS_TAG=haproxy-d87aa6fc-537c-4182-8fe6-de299c89bce4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d87aa6fc-537c-4182-8fe6-de299c89bce4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.360 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422668.3598268, 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.360 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] VM Started (Lifecycle Event)
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.385 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.388 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422668.3600154, 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.389 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] VM Paused (Lifecycle Event)
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.481 254884 DEBUG nova.compute.manager [req-eae571ea-9fab-43bf-8bf5-344c22b3af02 req-3d80f165-b196-4e26-948d-b52ad1ebe2f4 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Received event network-vif-plugged-6691b1fe-ff5a-4e6c-88fa-00ca95260dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.481 254884 DEBUG oslo_concurrency.lockutils [req-eae571ea-9fab-43bf-8bf5-344c22b3af02 req-3d80f165-b196-4e26-948d-b52ad1ebe2f4 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.481 254884 DEBUG oslo_concurrency.lockutils [req-eae571ea-9fab-43bf-8bf5-344c22b3af02 req-3d80f165-b196-4e26-948d-b52ad1ebe2f4 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.481 254884 DEBUG oslo_concurrency.lockutils [req-eae571ea-9fab-43bf-8bf5-344c22b3af02 req-3d80f165-b196-4e26-948d-b52ad1ebe2f4 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.481 254884 DEBUG nova.compute.manager [req-eae571ea-9fab-43bf-8bf5-344c22b3af02 req-3d80f165-b196-4e26-948d-b52ad1ebe2f4 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Processing event network-vif-plugged-6691b1fe-ff5a-4e6c-88fa-00ca95260dec _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.482 254884 DEBUG nova.compute.manager [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.485 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.488 254884 INFO nova.virt.libvirt.driver [-] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Instance spawned successfully.
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.488 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.493 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.496 254884 DEBUG nova.virt.driver [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] Emitting event <LifecycleEvent: 1769422668.4844928, 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.497 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] VM Resumed (Lifecycle Event)
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.510 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.510 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.511 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.511 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.512 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.512 254884 DEBUG nova.virt.libvirt.driver [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.533 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.536 254884 DEBUG nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 10:17:48 compute-0 podman[275360]: 2026-01-26 10:17:48.476069196 +0000 UTC m=+0.023894195 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.577 254884 INFO nova.compute.manager [None req-5138fc4f-4399-43cb-b1a0-517abe683547 - - - - - -] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 10:17:48 compute-0 podman[275360]: 2026-01-26 10:17:48.631473837 +0000 UTC m=+0.179298806 container create 1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.668 254884 INFO nova.compute.manager [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Took 8.22 seconds to spawn the instance on the hypervisor.
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.669 254884 DEBUG nova.compute.manager [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:17:48 compute-0 systemd[1]: Started libpod-conmon-1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00.scope.
Jan 26 10:17:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e82465fe467c1df4984e649514e4a856537121e8032ea642c9b04e7adf96bc4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 10:17:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:17:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.800 254884 INFO nova.compute.manager [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Took 10.28 seconds to build instance.
Jan 26 10:17:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:17:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:17:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:17:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:17:48 compute-0 nova_compute[254880]: 2026-01-26 10:17:48.816 254884 DEBUG oslo_concurrency.lockutils [None req-8bf0ec8e-b7b8-4309-99ae-de301befa6f6 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:17:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:17:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:48.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:17:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:48.853Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:17:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:48.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:17:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:48.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:49 compute-0 podman[275360]: 2026-01-26 10:17:49.003099941 +0000 UTC m=+0.550924950 container init 1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 10:17:49 compute-0 podman[275360]: 2026-01-26 10:17:49.009479752 +0000 UTC m=+0.557304731 container start 1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 26 10:17:49 compute-0 neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4[275376]: [NOTICE]   (275381) : New worker (275383) forked
Jan 26 10:17:49 compute-0 neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4[275376]: [NOTICE]   (275381) : Loading success.
Jan 26 10:17:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:17:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:49.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:17:49 compute-0 ceph-mon[74456]: pgmap v1068: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 10:17:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:17:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:17:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1069: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 26 10:17:50 compute-0 nova_compute[254880]: 2026-01-26 10:17:50.080 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:50 compute-0 ceph-mon[74456]: pgmap v1069: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 26 10:17:50 compute-0 nova_compute[254880]: 2026-01-26 10:17:50.760 254884 DEBUG nova.compute.manager [req-623a8d1f-579a-480e-986e-aa4b65c850b8 req-1b6353f0-6028-4f63-914e-f147f32db03c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Received event network-vif-plugged-6691b1fe-ff5a-4e6c-88fa-00ca95260dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:17:50 compute-0 nova_compute[254880]: 2026-01-26 10:17:50.760 254884 DEBUG oslo_concurrency.lockutils [req-623a8d1f-579a-480e-986e-aa4b65c850b8 req-1b6353f0-6028-4f63-914e-f147f32db03c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:50 compute-0 nova_compute[254880]: 2026-01-26 10:17:50.761 254884 DEBUG oslo_concurrency.lockutils [req-623a8d1f-579a-480e-986e-aa4b65c850b8 req-1b6353f0-6028-4f63-914e-f147f32db03c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:50 compute-0 nova_compute[254880]: 2026-01-26 10:17:50.761 254884 DEBUG oslo_concurrency.lockutils [req-623a8d1f-579a-480e-986e-aa4b65c850b8 req-1b6353f0-6028-4f63-914e-f147f32db03c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:50 compute-0 nova_compute[254880]: 2026-01-26 10:17:50.761 254884 DEBUG nova.compute.manager [req-623a8d1f-579a-480e-986e-aa4b65c850b8 req-1b6353f0-6028-4f63-914e-f147f32db03c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] No waiting events found dispatching network-vif-plugged-6691b1fe-ff5a-4e6c-88fa-00ca95260dec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:17:50 compute-0 nova_compute[254880]: 2026-01-26 10:17:50.761 254884 WARNING nova.compute.manager [req-623a8d1f-579a-480e-986e-aa4b65c850b8 req-1b6353f0-6028-4f63-914e-f147f32db03c b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Received unexpected event network-vif-plugged-6691b1fe-ff5a-4e6c-88fa-00ca95260dec for instance with vm_state active and task_state None.
Jan 26 10:17:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:50.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:51 compute-0 sudo[275395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:17:51 compute-0 sudo[275395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:17:51 compute-0 sudo[275395]: pam_unix(sudo:session): session closed for user root
Jan 26 10:17:51 compute-0 sudo[275420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 26 10:17:51 compute-0 sudo[275420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:17:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:51.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:51 compute-0 podman[275510]: 2026-01-26 10:17:51.686452475 +0000 UTC m=+0.085118296 container exec 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:17:51 compute-0 nova_compute[254880]: 2026-01-26 10:17:51.689 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:51 compute-0 podman[275510]: 2026-01-26 10:17:51.810572919 +0000 UTC m=+0.209238710 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 26 10:17:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1070: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 26 10:17:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:17:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:17:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:17:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:17:52 compute-0 podman[275629]: 2026-01-26 10:17:52.276164308 +0000 UTC m=+0.058766845 container exec 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:17:52 compute-0 podman[275629]: 2026-01-26 10:17:52.285463936 +0000 UTC m=+0.068066463 container exec_died 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:17:52 compute-0 podman[275722]: 2026-01-26 10:17:52.744596253 +0000 UTC m=+0.156752064 container exec 30687b991877ce56126a0423776942e639cc0488e2a92116947c3c0dae468e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 10:17:52 compute-0 NetworkManager[48970]: <info>  [1769422672.8217] manager: (patch-br-int-to-provnet-94d9950f-5cf2-4813-9455-dd14377245f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Jan 26 10:17:52 compute-0 NetworkManager[48970]: <info>  [1769422672.8233] manager: (patch-provnet-94d9950f-5cf2-4813-9455-dd14377245f4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Jan 26 10:17:52 compute-0 nova_compute[254880]: 2026-01-26 10:17:52.821 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:52 compute-0 ovn_controller[155832]: 2026-01-26T10:17:52Z|00082|binding|INFO|Releasing lport 2ca85ea2-a6ad-497f-b6d7-060239af18dc from this chassis (sb_readonly=0)
Jan 26 10:17:52 compute-0 podman[275722]: 2026-01-26 10:17:52.832415772 +0000 UTC m=+0.244571553 container exec_died 30687b991877ce56126a0423776942e639cc0488e2a92116947c3c0dae468e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 10:17:52 compute-0 nova_compute[254880]: 2026-01-26 10:17:52.869 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:52 compute-0 ovn_controller[155832]: 2026-01-26T10:17:52Z|00083|binding|INFO|Releasing lport 2ca85ea2-a6ad-497f-b6d7-060239af18dc from this chassis (sb_readonly=0)
Jan 26 10:17:52 compute-0 nova_compute[254880]: 2026-01-26 10:17:52.873 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.002000046s ======
Jan 26 10:17:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:52.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000046s
Jan 26 10:17:53 compute-0 ceph-mon[74456]: pgmap v1070: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 26 10:17:53 compute-0 podman[275787]: 2026-01-26 10:17:53.104207375 +0000 UTC m=+0.121389341 container exec 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 10:17:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:53.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:53 compute-0 podman[275810]: 2026-01-26 10:17:53.403329032 +0000 UTC m=+0.281921883 container exec_died 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 10:17:53 compute-0 podman[275787]: 2026-01-26 10:17:53.463452338 +0000 UTC m=+0.480634274 container exec_died 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 10:17:53 compute-0 nova_compute[254880]: 2026-01-26 10:17:53.641 254884 DEBUG nova.compute.manager [req-4759cdaf-0526-4ec5-a5a9-1723dc34b906 req-2b1f8bc9-c0eb-4def-a35b-b7c3ab2f5701 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Received event network-changed-6691b1fe-ff5a-4e6c-88fa-00ca95260dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:17:53 compute-0 nova_compute[254880]: 2026-01-26 10:17:53.642 254884 DEBUG nova.compute.manager [req-4759cdaf-0526-4ec5-a5a9-1723dc34b906 req-2b1f8bc9-c0eb-4def-a35b-b7c3ab2f5701 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Refreshing instance network info cache due to event network-changed-6691b1fe-ff5a-4e6c-88fa-00ca95260dec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:17:53 compute-0 nova_compute[254880]: 2026-01-26 10:17:53.643 254884 DEBUG oslo_concurrency.lockutils [req-4759cdaf-0526-4ec5-a5a9-1723dc34b906 req-2b1f8bc9-c0eb-4def-a35b-b7c3ab2f5701 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:17:53 compute-0 nova_compute[254880]: 2026-01-26 10:17:53.643 254884 DEBUG oslo_concurrency.lockutils [req-4759cdaf-0526-4ec5-a5a9-1723dc34b906 req-2b1f8bc9-c0eb-4def-a35b-b7c3ab2f5701 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:17:53 compute-0 nova_compute[254880]: 2026-01-26 10:17:53.643 254884 DEBUG nova.network.neutron [req-4759cdaf-0526-4ec5-a5a9-1723dc34b906 req-2b1f8bc9-c0eb-4def-a35b-b7c3ab2f5701 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Refreshing network info cache for port 6691b1fe-ff5a-4e6c-88fa-00ca95260dec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:17:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1071: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 26 10:17:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:17:54 compute-0 podman[275853]: 2026-01-26 10:17:54.544072945 +0000 UTC m=+0.925113504 container exec 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, name=keepalived)
Jan 26 10:17:54 compute-0 ceph-mon[74456]: pgmap v1071: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 26 10:17:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:54.703 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:17:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:54.704 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:17:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:17:54.704 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:17:54 compute-0 podman[275875]: 2026-01-26 10:17:54.802439632 +0000 UTC m=+0.236742568 container exec_died 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, release=1793, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, version=2.2.4, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-type=git)
Jan 26 10:17:54 compute-0 podman[275853]: 2026-01-26 10:17:54.807816239 +0000 UTC m=+1.188856788 container exec_died 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, distribution-scope=public, release=1793, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 26 10:17:54 compute-0 podman[275889]: 2026-01-26 10:17:54.899932209 +0000 UTC m=+0.060681101 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 10:17:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:54.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:55 compute-0 nova_compute[254880]: 2026-01-26 10:17:55.083 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:55.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:55 compute-0 podman[275938]: 2026-01-26 10:17:55.331553217 +0000 UTC m=+0.364410596 container exec c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:17:55 compute-0 podman[275938]: 2026-01-26 10:17:55.603470702 +0000 UTC m=+0.636328001 container exec_died c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:17:55 compute-0 podman[276013]: 2026-01-26 10:17:55.918691468 +0000 UTC m=+0.098097102 container exec ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 10:17:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1072: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 26 10:17:56 compute-0 podman[276044]: 2026-01-26 10:17:56.14841048 +0000 UTC m=+0.055107219 container exec_died ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 10:17:56 compute-0 nova_compute[254880]: 2026-01-26 10:17:56.512 254884 DEBUG nova.network.neutron [req-4759cdaf-0526-4ec5-a5a9-1723dc34b906 req-2b1f8bc9-c0eb-4def-a35b-b7c3ab2f5701 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Updated VIF entry in instance network info cache for port 6691b1fe-ff5a-4e6c-88fa-00ca95260dec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:17:56 compute-0 nova_compute[254880]: 2026-01-26 10:17:56.513 254884 DEBUG nova.network.neutron [req-4759cdaf-0526-4ec5-a5a9-1723dc34b906 req-2b1f8bc9-c0eb-4def-a35b-b7c3ab2f5701 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Updating instance_info_cache with network_info: [{"id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "address": "fa:16:3e:31:70:dc", "network": {"id": "d87aa6fc-537c-4182-8fe6-de299c89bce4", "bridge": "br-int", "label": "tempest-network-smoke--1656254383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6691b1fe-ff", "ovs_interfaceid": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:17:56 compute-0 nova_compute[254880]: 2026-01-26 10:17:56.582 254884 DEBUG oslo_concurrency.lockutils [req-4759cdaf-0526-4ec5-a5a9-1723dc34b906 req-2b1f8bc9-c0eb-4def-a35b-b7c3ab2f5701 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:17:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:17:56] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Jan 26 10:17:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:17:56] "GET /metrics HTTP/1.1" 200 48563 "" "Prometheus/2.51.0"
Jan 26 10:17:56 compute-0 ceph-mon[74456]: pgmap v1072: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 26 10:17:56 compute-0 podman[276013]: 2026-01-26 10:17:56.675661941 +0000 UTC m=+0.855067565 container exec_died ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 10:17:56 compute-0 nova_compute[254880]: 2026-01-26 10:17:56.691 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:17:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:56.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:17:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:17:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:17:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:17:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:17:57 compute-0 podman[276124]: 2026-01-26 10:17:57.128635222 +0000 UTC m=+0.074787913 container exec 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:17:57 compute-0 podman[276124]: 2026-01-26 10:17:57.166247978 +0000 UTC m=+0.112400699 container exec_died 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:17:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:57.187Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:17:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:57.188Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:17:57 compute-0 sudo[275420]: pam_unix(sudo:session): session closed for user root
Jan 26 10:17:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:17:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:57.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:57 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:17:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:17:57 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:17:57 compute-0 sudo[276167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:17:57 compute-0 sudo[276167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:17:57 compute-0 sudo[276167]: pam_unix(sudo:session): session closed for user root
Jan 26 10:17:57 compute-0 sudo[276192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:17:57 compute-0 sudo[276192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:17:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1073: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:17:57 compute-0 sudo[276192]: pam_unix(sudo:session): session closed for user root
Jan 26 10:17:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:17:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:17:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:17:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:17:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1074: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Jan 26 10:17:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:17:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:17:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:17:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:17:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:17:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:17:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:17:58 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:17:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:17:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:17:58 compute-0 sudo[276248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:17:58 compute-0 sudo[276248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:17:58 compute-0 sudo[276248]: pam_unix(sudo:session): session closed for user root
Jan 26 10:17:58 compute-0 sudo[276273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:17:58 compute-0 sudo[276273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:17:58 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:17:58 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:17:58 compute-0 ceph-mon[74456]: pgmap v1073: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 26 10:17:58 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:17:58 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:17:58 compute-0 ceph-mon[74456]: pgmap v1074: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Jan 26 10:17:58 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:17:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3940202117' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:17:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3940202117' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:17:58 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:17:58 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:17:58 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:17:58 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:17:58 compute-0 podman[276340]: 2026-01-26 10:17:58.663781337 +0000 UTC m=+0.106080070 container create 5730502223b688ec7e1e8baae02b874c09f8bd946b9352889f25bbd8779e8661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_swartz, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:17:58 compute-0 podman[276340]: 2026-01-26 10:17:58.579651236 +0000 UTC m=+0.021949989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:17:58 compute-0 systemd[1]: Started libpod-conmon-5730502223b688ec7e1e8baae02b874c09f8bd946b9352889f25bbd8779e8661.scope.
Jan 26 10:17:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:17:58 compute-0 podman[276340]: 2026-01-26 10:17:58.768078574 +0000 UTC m=+0.210377317 container init 5730502223b688ec7e1e8baae02b874c09f8bd946b9352889f25bbd8779e8661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:17:58 compute-0 podman[276340]: 2026-01-26 10:17:58.778913999 +0000 UTC m=+0.221212742 container start 5730502223b688ec7e1e8baae02b874c09f8bd946b9352889f25bbd8779e8661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:17:58 compute-0 goofy_swartz[276356]: 167 167
Jan 26 10:17:58 compute-0 systemd[1]: libpod-5730502223b688ec7e1e8baae02b874c09f8bd946b9352889f25bbd8779e8661.scope: Deactivated successfully.
Jan 26 10:17:58 compute-0 podman[276340]: 2026-01-26 10:17:58.796901713 +0000 UTC m=+0.239200446 container attach 5730502223b688ec7e1e8baae02b874c09f8bd946b9352889f25bbd8779e8661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_swartz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 26 10:17:58 compute-0 podman[276340]: 2026-01-26 10:17:58.797940898 +0000 UTC m=+0.240239631 container died 5730502223b688ec7e1e8baae02b874c09f8bd946b9352889f25bbd8779e8661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_swartz, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 10:17:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:17:58.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:17:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a51f77b21c8d9cd2e5fc4b79b9b8720486be1f60fe37bee1df88ba03d706bff-merged.mount: Deactivated successfully.
Jan 26 10:17:58 compute-0 podman[276340]: 2026-01-26 10:17:58.887365294 +0000 UTC m=+0.329664027 container remove 5730502223b688ec7e1e8baae02b874c09f8bd946b9352889f25bbd8779e8661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:17:58 compute-0 systemd[1]: libpod-conmon-5730502223b688ec7e1e8baae02b874c09f8bd946b9352889f25bbd8779e8661.scope: Deactivated successfully.
Jan 26 10:17:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:17:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:17:58.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:17:59 compute-0 podman[276383]: 2026-01-26 10:17:59.09348721 +0000 UTC m=+0.078470289 container create d1d502a362b1901a08bc07fab814d2b21c19e8ec3cea09edbcbf2b95c7c91c1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 10:17:59 compute-0 systemd[1]: Started libpod-conmon-d1d502a362b1901a08bc07fab814d2b21c19e8ec3cea09edbcbf2b95c7c91c1e.scope.
Jan 26 10:17:59 compute-0 podman[276383]: 2026-01-26 10:17:59.039043638 +0000 UTC m=+0.024026737 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:17:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0eae2827fcae2d1f1f6ffb19f6e2b30cf564f3b07c7964674b9906f70fda500/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0eae2827fcae2d1f1f6ffb19f6e2b30cf564f3b07c7964674b9906f70fda500/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0eae2827fcae2d1f1f6ffb19f6e2b30cf564f3b07c7964674b9906f70fda500/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0eae2827fcae2d1f1f6ffb19f6e2b30cf564f3b07c7964674b9906f70fda500/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0eae2827fcae2d1f1f6ffb19f6e2b30cf564f3b07c7964674b9906f70fda500/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:17:59 compute-0 podman[276383]: 2026-01-26 10:17:59.212289818 +0000 UTC m=+0.197272927 container init d1d502a362b1901a08bc07fab814d2b21c19e8ec3cea09edbcbf2b95c7c91c1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:17:59 compute-0 podman[276383]: 2026-01-26 10:17:59.219641502 +0000 UTC m=+0.204624581 container start d1d502a362b1901a08bc07fab814d2b21c19e8ec3cea09edbcbf2b95c7c91c1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 26 10:17:59 compute-0 podman[276383]: 2026-01-26 10:17:59.223161395 +0000 UTC m=+0.208144504 container attach d1d502a362b1901a08bc07fab814d2b21c19e8ec3cea09edbcbf2b95c7c91c1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 10:17:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:17:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:17:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:17:59.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:17:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:17:59 compute-0 wizardly_feistel[276399]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:17:59 compute-0 wizardly_feistel[276399]: --> All data devices are unavailable
Jan 26 10:17:59 compute-0 systemd[1]: libpod-d1d502a362b1901a08bc07fab814d2b21c19e8ec3cea09edbcbf2b95c7c91c1e.scope: Deactivated successfully.
Jan 26 10:17:59 compute-0 podman[276415]: 2026-01-26 10:17:59.595916626 +0000 UTC m=+0.023635717 container died d1d502a362b1901a08bc07fab814d2b21c19e8ec3cea09edbcbf2b95c7c91c1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_feistel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:17:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0eae2827fcae2d1f1f6ffb19f6e2b30cf564f3b07c7964674b9906f70fda500-merged.mount: Deactivated successfully.
Jan 26 10:17:59 compute-0 podman[276415]: 2026-01-26 10:17:59.702715902 +0000 UTC m=+0.130434973 container remove d1d502a362b1901a08bc07fab814d2b21c19e8ec3cea09edbcbf2b95c7c91c1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_feistel, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 10:17:59 compute-0 systemd[1]: libpod-conmon-d1d502a362b1901a08bc07fab814d2b21c19e8ec3cea09edbcbf2b95c7c91c1e.scope: Deactivated successfully.
Jan 26 10:17:59 compute-0 sudo[276273]: pam_unix(sudo:session): session closed for user root
Jan 26 10:17:59 compute-0 sudo[276430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:17:59 compute-0 sudo[276430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:17:59 compute-0 sudo[276430]: pam_unix(sudo:session): session closed for user root
Jan 26 10:17:59 compute-0 sudo[276455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:17:59 compute-0 sudo[276455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:18:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1075: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 572 KiB/s rd, 22 op/s
Jan 26 10:18:00 compute-0 nova_compute[254880]: 2026-01-26 10:18:00.122 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:00 compute-0 podman[276523]: 2026-01-26 10:18:00.264780613 +0000 UTC m=+0.061051398 container create 8ccc7c1f084ab7276fc81abcebd5ff56c097b9a702c2f7c4559adfbe03d080d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:18:00 compute-0 ceph-mon[74456]: pgmap v1075: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 572 KiB/s rd, 22 op/s
Jan 26 10:18:00 compute-0 podman[276523]: 2026-01-26 10:18:00.227740041 +0000 UTC m=+0.024010836 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:18:00 compute-0 systemd[1]: Started libpod-conmon-8ccc7c1f084ab7276fc81abcebd5ff56c097b9a702c2f7c4559adfbe03d080d5.scope.
Jan 26 10:18:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:18:00 compute-0 podman[276523]: 2026-01-26 10:18:00.531257162 +0000 UTC m=+0.327527967 container init 8ccc7c1f084ab7276fc81abcebd5ff56c097b9a702c2f7c4559adfbe03d080d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:18:00 compute-0 podman[276523]: 2026-01-26 10:18:00.53801068 +0000 UTC m=+0.334281455 container start 8ccc7c1f084ab7276fc81abcebd5ff56c097b9a702c2f7c4559adfbe03d080d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:18:00 compute-0 podman[276523]: 2026-01-26 10:18:00.541319448 +0000 UTC m=+0.337590253 container attach 8ccc7c1f084ab7276fc81abcebd5ff56c097b9a702c2f7c4559adfbe03d080d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ishizaka, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:18:00 compute-0 musing_ishizaka[276541]: 167 167
Jan 26 10:18:00 compute-0 systemd[1]: libpod-8ccc7c1f084ab7276fc81abcebd5ff56c097b9a702c2f7c4559adfbe03d080d5.scope: Deactivated successfully.
Jan 26 10:18:00 compute-0 conmon[276541]: conmon 8ccc7c1f084ab7276fc8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ccc7c1f084ab7276fc81abcebd5ff56c097b9a702c2f7c4559adfbe03d080d5.scope/container/memory.events
Jan 26 10:18:00 compute-0 podman[276523]: 2026-01-26 10:18:00.544062953 +0000 UTC m=+0.340333728 container died 8ccc7c1f084ab7276fc81abcebd5ff56c097b9a702c2f7c4559adfbe03d080d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ishizaka, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 10:18:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a6ef77c0d098923ddb165cccbe2bc589c63a56625ee19605a53f451b8c7a5cf-merged.mount: Deactivated successfully.
Jan 26 10:18:00 compute-0 podman[276523]: 2026-01-26 10:18:00.61485409 +0000 UTC m=+0.411124855 container remove 8ccc7c1f084ab7276fc81abcebd5ff56c097b9a702c2f7c4559adfbe03d080d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:18:00 compute-0 systemd[1]: libpod-conmon-8ccc7c1f084ab7276fc81abcebd5ff56c097b9a702c2f7c4559adfbe03d080d5.scope: Deactivated successfully.
Jan 26 10:18:00 compute-0 podman[276566]: 2026-01-26 10:18:00.789255339 +0000 UTC m=+0.049479277 container create 079883b295ec377a6e5e9751f9ea91b9d4bc707e205f0b8f58b11502956519e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 26 10:18:00 compute-0 systemd[1]: Started libpod-conmon-079883b295ec377a6e5e9751f9ea91b9d4bc707e205f0b8f58b11502956519e9.scope.
Jan 26 10:18:00 compute-0 podman[276566]: 2026-01-26 10:18:00.766010111 +0000 UTC m=+0.026234069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:18:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f48ac2de050942465f0e44c204bcf560b31ec1fe59d3e663d257b192af2806d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f48ac2de050942465f0e44c204bcf560b31ec1fe59d3e663d257b192af2806d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f48ac2de050942465f0e44c204bcf560b31ec1fe59d3e663d257b192af2806d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f48ac2de050942465f0e44c204bcf560b31ec1fe59d3e663d257b192af2806d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:18:00 compute-0 podman[276566]: 2026-01-26 10:18:00.891101909 +0000 UTC m=+0.151325857 container init 079883b295ec377a6e5e9751f9ea91b9d4bc707e205f0b8f58b11502956519e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:18:00 compute-0 podman[276566]: 2026-01-26 10:18:00.900870789 +0000 UTC m=+0.161094727 container start 079883b295ec377a6e5e9751f9ea91b9d4bc707e205f0b8f58b11502956519e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:18:00 compute-0 podman[276566]: 2026-01-26 10:18:00.934580033 +0000 UTC m=+0.194803991 container attach 079883b295ec377a6e5e9751f9ea91b9d4bc707e205f0b8f58b11502956519e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 10:18:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:01.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:01 compute-0 dreamy_germain[276583]: {
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:     "0": [
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:         {
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:             "devices": [
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "/dev/loop3"
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:             ],
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:             "lv_name": "ceph_lv0",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:             "lv_size": "21470642176",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:             "name": "ceph_lv0",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:             "tags": {
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "ceph.cluster_name": "ceph",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "ceph.crush_device_class": "",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "ceph.encrypted": "0",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "ceph.osd_id": "0",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "ceph.type": "block",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "ceph.vdo": "0",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:                 "ceph.with_tpm": "0"
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:             },
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:             "type": "block",
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:             "vg_name": "ceph_vg0"
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:         }
Jan 26 10:18:01 compute-0 dreamy_germain[276583]:     ]
Jan 26 10:18:01 compute-0 dreamy_germain[276583]: }
Jan 26 10:18:01 compute-0 systemd[1]: libpod-079883b295ec377a6e5e9751f9ea91b9d4bc707e205f0b8f58b11502956519e9.scope: Deactivated successfully.
Jan 26 10:18:01 compute-0 podman[276566]: 2026-01-26 10:18:01.22012961 +0000 UTC m=+0.480353548 container died 079883b295ec377a6e5e9751f9ea91b9d4bc707e205f0b8f58b11502956519e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:18:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:18:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:01.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f48ac2de050942465f0e44c204bcf560b31ec1fe59d3e663d257b192af2806d7-merged.mount: Deactivated successfully.
Jan 26 10:18:01 compute-0 podman[276566]: 2026-01-26 10:18:01.362689758 +0000 UTC m=+0.622913696 container remove 079883b295ec377a6e5e9751f9ea91b9d4bc707e205f0b8f58b11502956519e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 26 10:18:01 compute-0 systemd[1]: libpod-conmon-079883b295ec377a6e5e9751f9ea91b9d4bc707e205f0b8f58b11502956519e9.scope: Deactivated successfully.
Jan 26 10:18:01 compute-0 sudo[276455]: pam_unix(sudo:session): session closed for user root
Jan 26 10:18:01 compute-0 sudo[276606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:18:01 compute-0 sudo[276606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:18:01 compute-0 sudo[276606]: pam_unix(sudo:session): session closed for user root
Jan 26 10:18:01 compute-0 sudo[276631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:18:01 compute-0 sudo[276631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:18:01 compute-0 nova_compute[254880]: 2026-01-26 10:18:01.693 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:01 compute-0 sshd-session[276656]: Invalid user zabbix from 157.245.76.178 port 60938
Jan 26 10:18:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:18:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:18:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:18:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:18:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1076: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 572 KiB/s rd, 22 op/s
Jan 26 10:18:02 compute-0 podman[276697]: 2026-01-26 10:18:01.950175018 +0000 UTC m=+0.025203075 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:18:02 compute-0 podman[276697]: 2026-01-26 10:18:02.045562595 +0000 UTC m=+0.120590632 container create 066657863dbfc59969a26cd01aa6e6bd81d0599435685d12deda8caacd2a9829 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dubinsky, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:18:02 compute-0 sshd-session[276656]: Connection closed by invalid user zabbix 157.245.76.178 port 60938 [preauth]
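[The two sshd-session lines interleaved here record internet background noise: an invalid user "zabbix" probe from 157.245.76.178 that disconnects before authenticating. A throwaway sketch for tallying such probes per source address from a journal dump, e.g. `journalctl -t sshd-session | python3 probes.py` (the syslog tag on this host is sshd-session, as shown above):

    # Sketch: count "Invalid user" SSH probes per source IP from journal text on stdin.
    import re
    import sys
    from collections import Counter

    INVALID_RE = re.compile(r"Invalid user (?P<user>\S+) from (?P<ip>[\d.]+) port \d+")

    hits = Counter()
    for line in sys.stdin:
        m = INVALID_RE.search(line)
        if m:
            hits[m.group("ip")] += 1

    for ip, n in hits.most_common(10):
        print(f"{ip}\t{n}")
]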
Jan 26 10:18:02 compute-0 systemd[1]: Started libpod-conmon-066657863dbfc59969a26cd01aa6e6bd81d0599435685d12deda8caacd2a9829.scope.
Jan 26 10:18:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:18:02 compute-0 podman[276697]: 2026-01-26 10:18:02.192081697 +0000 UTC m=+0.267109844 container init 066657863dbfc59969a26cd01aa6e6bd81d0599435685d12deda8caacd2a9829 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dubinsky, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 26 10:18:02 compute-0 podman[276697]: 2026-01-26 10:18:02.201434927 +0000 UTC m=+0.276462964 container start 066657863dbfc59969a26cd01aa6e6bd81d0599435685d12deda8caacd2a9829 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:18:02 compute-0 podman[276697]: 2026-01-26 10:18:02.205535843 +0000 UTC m=+0.280563980 container attach 066657863dbfc59969a26cd01aa6e6bd81d0599435685d12deda8caacd2a9829 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:18:02 compute-0 musing_dubinsky[276713]: 167 167
Jan 26 10:18:02 compute-0 systemd[1]: libpod-066657863dbfc59969a26cd01aa6e6bd81d0599435685d12deda8caacd2a9829.scope: Deactivated successfully.
Jan 26 10:18:02 compute-0 podman[276697]: 2026-01-26 10:18:02.210061361 +0000 UTC m=+0.285089398 container died 066657863dbfc59969a26cd01aa6e6bd81d0599435685d12deda8caacd2a9829 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dubinsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 26 10:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb0056741ea37deaab975dacaac2767fc76142762f39a5fbb8dc8013c3656ee2-merged.mount: Deactivated successfully.
Jan 26 10:18:02 compute-0 podman[276697]: 2026-01-26 10:18:02.271724263 +0000 UTC m=+0.346752300 container remove 066657863dbfc59969a26cd01aa6e6bd81d0599435685d12deda8caacd2a9829 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_dubinsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 26 10:18:02 compute-0 systemd[1]: libpod-conmon-066657863dbfc59969a26cd01aa6e6bd81d0599435685d12deda8caacd2a9829.scope: Deactivated successfully.
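[The podman/systemd lines above trace the complete lifecycle of a short-lived cephadm helper container (musing_dubinsky): image pull, create, conmon scope start, init, start, attach, exit ("died"), overlay unmount, remove, and scope teardown, all within roughly 300 ms. A sketch that watches such transitions live via `podman events`; the JSON field names ("Type", "Status", "ID", "Name") are podman's event attributes as I understand them and should be checked against the installed version:

    # Sketch: follow container state transitions from `podman events --format json`.
    import json
    import subprocess

    proc = subprocess.Popen(["podman", "events", "--format", "json"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") != "container":
            continue
        # e.g. create -> init -> start -> attach -> died -> remove
        print(ev.get("ID", "")[:12], ev.get("Name"), ev.get("Status"))
]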
Jan 26 10:18:02 compute-0 ovn_controller[155832]: 2026-01-26T10:18:02Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:31:70:dc 10.100.0.4
Jan 26 10:18:02 compute-0 ovn_controller[155832]: 2026-01-26T10:18:02Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:31:70:dc 10.100.0.4
Jan 26 10:18:02 compute-0 podman[276736]: 2026-01-26 10:18:02.509973636 +0000 UTC m=+0.114449017 container create 65539ed6567bfdff3e7ee2748c13175d7311e0160c3064255cf46e12c87b87c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:18:02 compute-0 podman[276736]: 2026-01-26 10:18:02.422461864 +0000 UTC m=+0.026937275 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:18:02 compute-0 systemd[1]: Started libpod-conmon-65539ed6567bfdff3e7ee2748c13175d7311e0160c3064255cf46e12c87b87c3.scope.
Jan 26 10:18:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caccf58847391cd2ce075360e2c66160520ebaefa1c392fac332bee063142ff7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caccf58847391cd2ce075360e2c66160520ebaefa1c392fac332bee063142ff7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caccf58847391cd2ce075360e2c66160520ebaefa1c392fac332bee063142ff7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caccf58847391cd2ce075360e2c66160520ebaefa1c392fac332bee063142ff7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:18:02 compute-0 podman[276736]: 2026-01-26 10:18:02.628577729 +0000 UTC m=+0.233053130 container init 65539ed6567bfdff3e7ee2748c13175d7311e0160c3064255cf46e12c87b87c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 10:18:02 compute-0 podman[276736]: 2026-01-26 10:18:02.636520997 +0000 UTC m=+0.240996398 container start 65539ed6567bfdff3e7ee2748c13175d7311e0160c3064255cf46e12c87b87c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 26 10:18:02 compute-0 podman[276736]: 2026-01-26 10:18:02.66169466 +0000 UTC m=+0.266170071 container attach 65539ed6567bfdff3e7ee2748c13175d7311e0160c3064255cf46e12c87b87c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:18:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:03.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:03 compute-0 lvm[276828]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:18:03 compute-0 lvm[276828]: VG ceph_vg0 finished
Jan 26 10:18:03 compute-0 awesome_fermat[276753]: {}
Jan 26 10:18:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:18:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:03.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
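[The radosgw "beast" lines are frontend access-log records; the anonymous `HEAD / HTTP/1.0` requests arriving every ~2 s from 192.168.122.100 and .102 have the shape of load-balancer health checks. A sketch for extracting per-client latencies from this exact line format:

    # Sketch: parse radosgw beast access-log lines (format as above) from stdin
    # and aggregate request latency per client address.
    import re
    import sys
    from collections import defaultdict

    BEAST_RE = re.compile(
        r'beast: 0x[0-9a-f]+: (?P<ip>[\d.]+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    lat = defaultdict(list)
    for line in sys.stdin:
        m = BEAST_RE.search(line)
        if m:
            lat[m.group("ip")].append(float(m.group("latency")))

    for ip, xs in sorted(lat.items()):
        print(f"{ip}  n={len(xs)}  max={max(xs):.9f}s")
]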
Jan 26 10:18:03 compute-0 systemd[1]: libpod-65539ed6567bfdff3e7ee2748c13175d7311e0160c3064255cf46e12c87b87c3.scope: Deactivated successfully.
Jan 26 10:18:03 compute-0 systemd[1]: libpod-65539ed6567bfdff3e7ee2748c13175d7311e0160c3064255cf46e12c87b87c3.scope: Consumed 1.068s CPU time.
Jan 26 10:18:03 compute-0 podman[276736]: 2026-01-26 10:18:03.325868786 +0000 UTC m=+0.930344217 container died 65539ed6567bfdff3e7ee2748c13175d7311e0160c3064255cf46e12c87b87c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:18:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-caccf58847391cd2ce075360e2c66160520ebaefa1c392fac332bee063142ff7-merged.mount: Deactivated successfully.
Jan 26 10:18:03 compute-0 podman[276736]: 2026-01-26 10:18:03.426528047 +0000 UTC m=+1.031003428 container remove 65539ed6567bfdff3e7ee2748c13175d7311e0160c3064255cf46e12c87b87c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 10:18:03 compute-0 systemd[1]: libpod-conmon-65539ed6567bfdff3e7ee2748c13175d7311e0160c3064255cf46e12c87b87c3.scope: Deactivated successfully.
Jan 26 10:18:03 compute-0 sudo[276631]: pam_unix(sudo:session): session closed for user root
Jan 26 10:18:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:18:03 compute-0 ceph-mon[74456]: pgmap v1076: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 572 KiB/s rd, 22 op/s
Jan 26 10:18:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:18:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:18:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:18:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:18:03 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:18:03 compute-0 sudo[276847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:18:03 compute-0 sudo[276847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:18:03 compute-0 sudo[276847]: pam_unix(sudo:session): session closed for user root
Jan 26 10:18:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1077: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 572 KiB/s rd, 22 op/s
Jan 26 10:18:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:18:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:18:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:18:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:18:04 compute-0 ceph-mon[74456]: pgmap v1077: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 572 KiB/s rd, 22 op/s
Jan 26 10:18:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:18:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:05.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:18:05 compute-0 nova_compute[254880]: 2026-01-26 10:18:05.124 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:05.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:05 compute-0 sudo[276874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:18:05 compute-0 sudo[276874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:18:05 compute-0 sudo[276874]: pam_unix(sudo:session): session closed for user root
Jan 26 10:18:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1078: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Jan 26 10:18:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:18:06] "GET /metrics HTTP/1.1" 200 48554 "" "Prometheus/2.51.0"
Jan 26 10:18:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:18:06] "GET /metrics HTTP/1.1" 200 48554 "" "Prometheus/2.51.0"
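[The pair of lines above is the same event logged twice: the mgr container's cherrypy access log and ceph-mgr's own prometheus-module log for a Prometheus scrape of /metrics. A minimal sketch of querying that exporter directly; port 9283 is the prometheus module's default and is an assumption here, since the port is not visible in these lines:

    # Sketch: scrape the ceph-mgr prometheus module the way the Prometheus
    # server above does. Port 9283 is the module default (assumption).
    from urllib.request import urlopen

    URL = "http://192.168.122.100:9283/metrics"

    with urlopen(URL, timeout=5) as resp:
        for raw in resp:
            line = raw.decode()
            if line.startswith("ceph_health_status"):
                print(line.strip())   # 0 = HEALTH_OK, 1 = WARN, 2 = ERR
]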
Jan 26 10:18:06 compute-0 nova_compute[254880]: 2026-01-26 10:18:06.695 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:18:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:18:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:18:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:18:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:07.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:07 compute-0 ceph-mon[74456]: pgmap v1078: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Jan 26 10:18:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:07.188Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:18:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:07.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1079: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Jan 26 10:18:08 compute-0 podman[276901]: 2026-01-26 10:18:08.156044483 +0000 UTC m=+0.082654438 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 10:18:08 compute-0 nova_compute[254880]: 2026-01-26 10:18:08.438 254884 INFO nova.compute.manager [None req-eda726db-1e5a-456d-89c9-cf2099822ea8 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Get console output
Jan 26 10:18:08 compute-0 nova_compute[254880]: 2026-01-26 10:18:08.443 268147 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
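["can't concat NoneType to bytes" is a swallowed TypeError in the privsep console reader: a pty read that produced None is appended to a bytes buffer. The failure and the usual guard in miniature (illustrative only, not nova's actual code):

    # Sketch: the TypeError nova logs above, and the guard that avoids it.
    buf = b""
    chunk = None            # e.g. a console read that returned nothing
    try:
        buf += chunk        # TypeError: can't concat NoneType to bytes
    except TypeError as e:
        print(f"Ignored error while reading from instance console pty: {e}")

    buf += chunk or b""     # defensive form: treat None as empty output
]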
Jan 26 10:18:08 compute-0 ceph-mon[74456]: pgmap v1079: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Jan 26 10:18:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:08.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
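[Both alertmanager dispatcher errors say the same thing: the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443, path /api/prometheus_receiver) did not answer within the notification deadline, so delivery was retried twice and then dropped. For debugging what Alertmanager is posting, a stand-in receiver can be bound on the same port; a minimal sketch (plain HTTP, a hypothetical replacement for the dashboard endpoint, not the dashboard's own implementation):

    # Sketch: a stand-in webhook endpoint for inspecting Alertmanager posts.
    # The real receivers in the log are the Ceph dashboard on port 8443.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Hook(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            payload = json.loads(body or "{}")
            for alert in payload.get("alerts", []):
                print(alert.get("status"), alert.get("labels", {}).get("alertname"))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Hook).serve_forever()
]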
Jan 26 10:18:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:09.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:09.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:09 compute-0 ovn_controller[155832]: 2026-01-26T10:18:09Z|00084|binding|INFO|Releasing lport 2ca85ea2-a6ad-497f-b6d7-060239af18dc from this chassis (sb_readonly=0)
Jan 26 10:18:09 compute-0 nova_compute[254880]: 2026-01-26 10:18:09.504 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:18:09 compute-0 ovn_controller[155832]: 2026-01-26T10:18:09Z|00085|binding|INFO|Releasing lport 2ca85ea2-a6ad-497f-b6d7-060239af18dc from this chassis (sb_readonly=0)
Jan 26 10:18:09 compute-0 nova_compute[254880]: 2026-01-26 10:18:09.565 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1080: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:18:10 compute-0 nova_compute[254880]: 2026-01-26 10:18:10.125 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:10 compute-0 ceph-mon[74456]: pgmap v1080: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:18:10 compute-0 nova_compute[254880]: 2026-01-26 10:18:10.887 254884 INFO nova.compute.manager [None req-6e34cd94-cbe2-40b4-947a-065728cecb08 c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Get console output
Jan 26 10:18:10 compute-0 nova_compute[254880]: 2026-01-26 10:18:10.892 268147 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 10:18:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:18:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:11.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:18:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:11.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:11 compute-0 nova_compute[254880]: 2026-01-26 10:18:11.680 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:11 compute-0 NetworkManager[48970]: <info>  [1769422691.6808] manager: (patch-br-int-to-provnet-94d9950f-5cf2-4813-9455-dd14377245f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Jan 26 10:18:11 compute-0 NetworkManager[48970]: <info>  [1769422691.6819] manager: (patch-provnet-94d9950f-5cf2-4813-9455-dd14377245f4-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Jan 26 10:18:11 compute-0 nova_compute[254880]: 2026-01-26 10:18:11.697 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:11 compute-0 ovn_controller[155832]: 2026-01-26T10:18:11Z|00086|binding|INFO|Releasing lport 2ca85ea2-a6ad-497f-b6d7-060239af18dc from this chassis (sb_readonly=0)
Jan 26 10:18:11 compute-0 nova_compute[254880]: 2026-01-26 10:18:11.739 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:11 compute-0 nova_compute[254880]: 2026-01-26 10:18:11.741 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:18:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:18:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:18:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:18:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1081: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:18:12 compute-0 nova_compute[254880]: 2026-01-26 10:18:12.031 254884 INFO nova.compute.manager [None req-2070dae4-62f5-4824-8e42-f449542c7c9e c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Get console output
Jan 26 10:18:12 compute-0 nova_compute[254880]: 2026-01-26 10:18:12.036 268147 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 10:18:12 compute-0 ceph-mon[74456]: pgmap v1081: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:18:12 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:12.552 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:18:12 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:12.553 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 10:18:12 compute-0 nova_compute[254880]: 2026-01-26 10:18:12.553 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:13.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:13.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.446 254884 DEBUG nova.compute.manager [req-437e6311-df92-4fff-b5bf-5bb2899d423f req-5b5c56db-6291-400c-9b48-4c2148337307 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Received event network-changed-6691b1fe-ff5a-4e6c-88fa-00ca95260dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.447 254884 DEBUG nova.compute.manager [req-437e6311-df92-4fff-b5bf-5bb2899d423f req-5b5c56db-6291-400c-9b48-4c2148337307 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Refreshing instance network info cache due to event network-changed-6691b1fe-ff5a-4e6c-88fa-00ca95260dec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.447 254884 DEBUG oslo_concurrency.lockutils [req-437e6311-df92-4fff-b5bf-5bb2899d423f req-5b5c56db-6291-400c-9b48-4c2148337307 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "refresh_cache-66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.447 254884 DEBUG oslo_concurrency.lockutils [req-437e6311-df92-4fff-b5bf-5bb2899d423f req-5b5c56db-6291-400c-9b48-4c2148337307 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquired lock "refresh_cache-66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.448 254884 DEBUG nova.network.neutron [req-437e6311-df92-4fff-b5bf-5bb2899d423f req-5b5c56db-6291-400c-9b48-4c2148337307 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Refreshing network info cache for port 6691b1fe-ff5a-4e6c-88fa-00ca95260dec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 10:18:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:13.555 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.709 254884 DEBUG oslo_concurrency.lockutils [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.709 254884 DEBUG oslo_concurrency.lockutils [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.709 254884 DEBUG oslo_concurrency.lockutils [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.710 254884 DEBUG oslo_concurrency.lockutils [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.710 254884 DEBUG oslo_concurrency.lockutils [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
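[The lockutils DEBUG quartet shows nova serializing the delete: it acquires the per-instance lock for do_terminate_instance, then takes and releases the "<uuid>-events" lock just long enough to clear pending external events. The same nested named-lock pattern with oslo.concurrency, as a sketch (the clearing step is a stand-in, not nova's code):

    # Sketch: nested named locks with oslo.concurrency, mirroring the
    # "<uuid>" and "<uuid>-events" locks in the DEBUG lines above.
    from oslo_concurrency import lockutils

    uuid = "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049"

    with lockutils.lock(uuid):                    # serialize work on the instance
        with lockutils.lock(f"{uuid}-events"):    # held briefly to drain events
            pending = []                          # stand-in for clear_events_for_instance
        print("terminating instance", uuid)
]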
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.711 254884 INFO nova.compute.manager [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Terminating instance
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.713 254884 DEBUG nova.compute.manager [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 10:18:13 compute-0 kernel: tap6691b1fe-ff (unregistering): left promiscuous mode
Jan 26 10:18:13 compute-0 NetworkManager[48970]: <info>  [1769422693.7730] device (tap6691b1fe-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 10:18:13 compute-0 ovn_controller[155832]: 2026-01-26T10:18:13Z|00087|binding|INFO|Releasing lport 6691b1fe-ff5a-4e6c-88fa-00ca95260dec from this chassis (sb_readonly=0)
Jan 26 10:18:13 compute-0 ovn_controller[155832]: 2026-01-26T10:18:13Z|00088|binding|INFO|Setting lport 6691b1fe-ff5a-4e6c-88fa-00ca95260dec down in Southbound
Jan 26 10:18:13 compute-0 ovn_controller[155832]: 2026-01-26T10:18:13Z|00089|binding|INFO|Removing iface tap6691b1fe-ff ovn-installed in OVS
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.789 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:13.797 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:70:dc 10.100.0.4'], port_security=['fa:16:3e:31:70:dc 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '66b4bcb5-3da1-4f3e-818d-9ff52a3e5049', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d87aa6fc-537c-4182-8fe6-de299c89bce4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ed221b375a44fc2bb2a8f232c5446e7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd93909c8-58d3-4249-87b1-29f4ada025eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cad66ad9-00da-4b9b-8ce3-9ca7cd41f24e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>], logical_port=6691b1fe-ff5a-4e6c-88fa-00ca95260dec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb847c367c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:18:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:13.799 166625 INFO neutron.agent.ovn.metadata.agent [-] Port 6691b1fe-ff5a-4e6c-88fa-00ca95260dec in datapath d87aa6fc-537c-4182-8fe6-de299c89bce4 unbound from our chassis
Jan 26 10:18:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:13.801 166625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d87aa6fc-537c-4182-8fe6-de299c89bce4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 10:18:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:13.802 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba5ef44-3a92-4337-87df-c024811e0dc4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:18:13 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:13.802 166625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4 namespace which is not needed anymore
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.813 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:13 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 26 10:18:13 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000d.scope: Consumed 13.254s CPU time.
Jan 26 10:18:13 compute-0 systemd-machined[221254]: Machine qemu-5-instance-0000000d terminated.
Jan 26 10:18:13 compute-0 neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4[275376]: [NOTICE]   (275381) : haproxy version is 2.8.14-c23fe91
Jan 26 10:18:13 compute-0 neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4[275376]: [NOTICE]   (275381) : path to executable is /usr/sbin/haproxy
Jan 26 10:18:13 compute-0 neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4[275376]: [WARNING]  (275381) : Exiting Master process...
Jan 26 10:18:13 compute-0 neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4[275376]: [ALERT]    (275381) : Current worker (275383) exited with code 143 (Terminated)
Jan 26 10:18:13 compute-0 neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4[275376]: [WARNING]  (275381) : All workers exited. Exiting... (0)
Jan 26 10:18:13 compute-0 systemd[1]: libpod-1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00.scope: Deactivated successfully.
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.950 254884 INFO nova.virt.libvirt.driver [-] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Instance destroyed successfully.
Jan 26 10:18:13 compute-0 nova_compute[254880]: 2026-01-26 10:18:13.951 254884 DEBUG nova.objects.instance [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lazy-loading 'resources' on Instance uuid 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 10:18:13 compute-0 podman[276962]: 2026-01-26 10:18:13.951914281 +0000 UTC m=+0.052828825 container died 1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:18:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00-userdata-shm.mount: Deactivated successfully.
Jan 26 10:18:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e82465fe467c1df4984e649514e4a856537121e8032ea642c9b04e7adf96bc4-merged.mount: Deactivated successfully.
Jan 26 10:18:13 compute-0 podman[276962]: 2026-01-26 10:18:13.992040226 +0000 UTC m=+0.092954770 container cleanup 1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 10:18:13 compute-0 systemd[1]: libpod-conmon-1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00.scope: Deactivated successfully.
Jan 26 10:18:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1082: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:18:14 compute-0 podman[277003]: 2026-01-26 10:18:14.056146006 +0000 UTC m=+0.043482245 container remove 1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 10:18:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:14.062 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[e7de2c12-20b6-4c47-908b-da121537fe5e]: (4, ('Mon Jan 26 10:18:13 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4 (1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00)\n1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00\nMon Jan 26 10:18:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4 (1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00)\n1c598b70fb5785bd3ed093c7bb2b09a8eb78031bda3cf472fdbf8a1844dbbf00\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:18:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:14.063 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[b720b6ed-0e33-4a97-aaef-3758381e21b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:18:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:14.064 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd87aa6fc-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:18:14 compute-0 nova_compute[254880]: 2026-01-26 10:18:14.095 254884 DEBUG nova.virt.libvirt.vif [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T10:17:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2093418869',display_name='tempest-TestNetworkBasicOps-server-2093418869',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2093418869',id=13,image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr+w06vwdCkNN484Qwtgmpdo7M4YjCuObyKgkng1fQAWyr7p8R/5bYL0ujc7Bi2+Kkxy4U8CSzkndngkshmYGUSDooRUWI9TIUGG687sqjKLkkjY6hdtQLZjfvxs498lw==',key_name='tempest-TestNetworkBasicOps-55488525',keypairs=<?>,launch_index=0,launched_at=2026-01-26T10:17:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ed221b375a44fc2bb2a8f232c5446e7',ramdisk_id='',reservation_id='r-bd1l0bad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6789692f-fc1f-4efa-ae75-dcc13be695ef',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-966559857',owner_user_name='tempest-TestNetworkBasicOps-966559857-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T10:17:48Z,user_data=None,user_id='c1208d3e25b940ea93fe76884c7a53db',uuid=66b4bcb5-3da1-4f3e-818d-9ff52a3e5049,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "address": "fa:16:3e:31:70:dc", "network": {"id": "d87aa6fc-537c-4182-8fe6-de299c89bce4", "bridge": "br-int", "label": "tempest-network-smoke--1656254383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6691b1fe-ff", "ovs_interfaceid": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 10:18:14 compute-0 nova_compute[254880]: 2026-01-26 10:18:14.095 254884 DEBUG nova.network.os_vif_util [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converting VIF {"id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "address": "fa:16:3e:31:70:dc", "network": {"id": "d87aa6fc-537c-4182-8fe6-de299c89bce4", "bridge": "br-int", "label": "tempest-network-smoke--1656254383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6691b1fe-ff", "ovs_interfaceid": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 10:18:14 compute-0 nova_compute[254880]: 2026-01-26 10:18:14.096 254884 DEBUG nova.network.os_vif_util [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:31:70:dc,bridge_name='br-int',has_traffic_filtering=True,id=6691b1fe-ff5a-4e6c-88fa-00ca95260dec,network=Network(d87aa6fc-537c-4182-8fe6-de299c89bce4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6691b1fe-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 10:18:14 compute-0 nova_compute[254880]: 2026-01-26 10:18:14.097 254884 DEBUG os_vif [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:31:70:dc,bridge_name='br-int',has_traffic_filtering=True,id=6691b1fe-ff5a-4e6c-88fa-00ca95260dec,network=Network(d87aa6fc-537c-4182-8fe6-de299c89bce4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6691b1fe-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 10:18:14 compute-0 nova_compute[254880]: 2026-01-26 10:18:14.098 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:14 compute-0 nova_compute[254880]: 2026-01-26 10:18:14.098 254884 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6691b1fe-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
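[Two DelPortCommand transactions run in quick succession: the metadata agent deletes its namespace tap (tapd87aa6fc-50) and nova's os-vif deletes the instance tap (tap6691b1fe-ff) from br-int. With if_exists=True both are idempotent, which makes the CLI equivalent easy to state; a sketch reproducing the effect with ovs-vsctl rather than an ovsdbapp IDL connection:

    # Sketch: the CLI equivalent of the DelPortCommand transactions above.
    # `ovs-vsctl --if-exists del-port <bridge> <port>` is idempotent, matching
    # if_exists=True in the ovsdbapp call.
    import subprocess

    def del_port(port: str, bridge: str = "br-int") -> None:
        subprocess.run(
            ["ovs-vsctl", "--if-exists", "del-port", bridge, port],
            check=True,
        )

    del_port("tap6691b1fe-ff")
]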
Jan 26 10:18:14 compute-0 nova_compute[254880]: 2026-01-26 10:18:14.111 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:14 compute-0 kernel: tapd87aa6fc-50: left promiscuous mode
Jan 26 10:18:14 compute-0 nova_compute[254880]: 2026-01-26 10:18:14.116 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 10:18:14 compute-0 nova_compute[254880]: 2026-01-26 10:18:14.131 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:14.134 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[745a1f55-9daa-4fdd-b64f-4ca99df09f71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:18:14 compute-0 nova_compute[254880]: 2026-01-26 10:18:14.136 254884 INFO os_vif [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:31:70:dc,bridge_name='br-int',has_traffic_filtering=True,id=6691b1fe-ff5a-4e6c-88fa-00ca95260dec,network=Network(d87aa6fc-537c-4182-8fe6-de299c89bce4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6691b1fe-ff')
Jan 26 10:18:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:14.149 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e663cb-394e-4c4e-8d67-580c1eebe642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:18:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:14.151 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c25e82-945e-4c3d-a5c5-53d6684bcd3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:18:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:14.165 261020 DEBUG oslo.privsep.daemon [-] privsep: reply[7decad88-7e2e-4d20-901a-68401ebac47e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462157, 'reachable_time': 30469, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277033, 'error': None, 'target': 'ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:18:14 compute-0 systemd[1]: run-netns-ovnmeta\x2dd87aa6fc\x2d537c\x2d4182\x2d8fe6\x2dde299c89bce4.mount: Deactivated successfully.
Jan 26 10:18:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:14.170 167020 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d87aa6fc-537c-4182-8fe6-de299c89bce4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 10:18:14 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:14.170 167020 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1e63f0-c2dc-49f5-aa0d-db0bcdf1ba0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 10:18:14 compute-0 ceph-mon[74456]: pgmap v1082: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 10:18:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:18:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:15.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.128 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:15.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.513 254884 DEBUG nova.network.neutron [req-437e6311-df92-4fff-b5bf-5bb2899d423f req-5b5c56db-6291-400c-9b48-4c2148337307 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Updated VIF entry in instance network info cache for port 6691b1fe-ff5a-4e6c-88fa-00ca95260dec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.514 254884 DEBUG nova.network.neutron [req-437e6311-df92-4fff-b5bf-5bb2899d423f req-5b5c56db-6291-400c-9b48-4c2148337307 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Updating instance_info_cache with network_info: [{"id": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "address": "fa:16:3e:31:70:dc", "network": {"id": "d87aa6fc-537c-4182-8fe6-de299c89bce4", "bridge": "br-int", "label": "tempest-network-smoke--1656254383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ed221b375a44fc2bb2a8f232c5446e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6691b1fe-ff", "ovs_interfaceid": "6691b1fe-ff5a-4e6c-88fa-00ca95260dec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.679 254884 DEBUG oslo_concurrency.lockutils [req-437e6311-df92-4fff-b5bf-5bb2899d423f req-5b5c56db-6291-400c-9b48-4c2148337307 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Releasing lock "refresh_cache-66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.686 254884 DEBUG nova.compute.manager [req-d9c82f9c-a4cc-4b34-9cd0-a09b4822298c req-5ba1d55c-7fbe-496b-961a-e098daf31262 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Received event network-vif-unplugged-6691b1fe-ff5a-4e6c-88fa-00ca95260dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.687 254884 DEBUG oslo_concurrency.lockutils [req-d9c82f9c-a4cc-4b34-9cd0-a09b4822298c req-5ba1d55c-7fbe-496b-961a-e098daf31262 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.687 254884 DEBUG oslo_concurrency.lockutils [req-d9c82f9c-a4cc-4b34-9cd0-a09b4822298c req-5ba1d55c-7fbe-496b-961a-e098daf31262 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.688 254884 DEBUG oslo_concurrency.lockutils [req-d9c82f9c-a4cc-4b34-9cd0-a09b4822298c req-5ba1d55c-7fbe-496b-961a-e098daf31262 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.688 254884 DEBUG nova.compute.manager [req-d9c82f9c-a4cc-4b34-9cd0-a09b4822298c req-5ba1d55c-7fbe-496b-961a-e098daf31262 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] No waiting events found dispatching network-vif-unplugged-6691b1fe-ff5a-4e6c-88fa-00ca95260dec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.688 254884 DEBUG nova.compute.manager [req-d9c82f9c-a4cc-4b34-9cd0-a09b4822298c req-5ba1d55c-7fbe-496b-961a-e098daf31262 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Received event network-vif-unplugged-6691b1fe-ff5a-4e6c-88fa-00ca95260dec for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.689 254884 DEBUG nova.compute.manager [req-d9c82f9c-a4cc-4b34-9cd0-a09b4822298c req-5ba1d55c-7fbe-496b-961a-e098daf31262 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Received event network-vif-plugged-6691b1fe-ff5a-4e6c-88fa-00ca95260dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.690 254884 DEBUG oslo_concurrency.lockutils [req-d9c82f9c-a4cc-4b34-9cd0-a09b4822298c req-5ba1d55c-7fbe-496b-961a-e098daf31262 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Acquiring lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.690 254884 DEBUG oslo_concurrency.lockutils [req-d9c82f9c-a4cc-4b34-9cd0-a09b4822298c req-5ba1d55c-7fbe-496b-961a-e098daf31262 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.691 254884 DEBUG oslo_concurrency.lockutils [req-d9c82f9c-a4cc-4b34-9cd0-a09b4822298c req-5ba1d55c-7fbe-496b-961a-e098daf31262 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.692 254884 DEBUG nova.compute.manager [req-d9c82f9c-a4cc-4b34-9cd0-a09b4822298c req-5ba1d55c-7fbe-496b-961a-e098daf31262 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] No waiting events found dispatching network-vif-plugged-6691b1fe-ff5a-4e6c-88fa-00ca95260dec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 10:18:15 compute-0 nova_compute[254880]: 2026-01-26 10:18:15.693 254884 WARNING nova.compute.manager [req-d9c82f9c-a4cc-4b34-9cd0-a09b4822298c req-5ba1d55c-7fbe-496b-961a-e098daf31262 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Received unexpected event network-vif-plugged-6691b1fe-ff5a-4e6c-88fa-00ca95260dec for instance with vm_state active and task_state deleting.
Jan 26 10:18:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1083: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 315 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:18:16 compute-0 ceph-mon[74456]: pgmap v1083: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 315 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 10:18:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:18:16] "GET /metrics HTTP/1.1" 200 48554 "" "Prometheus/2.51.0"
Jan 26 10:18:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:18:16] "GET /metrics HTTP/1.1" 200 48554 "" "Prometheus/2.51.0"
Jan 26 10:18:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:18:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:18:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:18:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:18:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:17.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:17 compute-0 nova_compute[254880]: 2026-01-26 10:18:17.042 254884 INFO nova.virt.libvirt.driver [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Deleting instance files /var/lib/nova/instances/66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_del
Jan 26 10:18:17 compute-0 nova_compute[254880]: 2026-01-26 10:18:17.044 254884 INFO nova.virt.libvirt.driver [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Deletion of /var/lib/nova/instances/66b4bcb5-3da1-4f3e-818d-9ff52a3e5049_del complete
Jan 26 10:18:17 compute-0 nova_compute[254880]: 2026-01-26 10:18:17.142 254884 INFO nova.compute.manager [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Took 3.43 seconds to destroy the instance on the hypervisor.
Jan 26 10:18:17 compute-0 nova_compute[254880]: 2026-01-26 10:18:17.143 254884 DEBUG oslo.service.loopingcall [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 10:18:17 compute-0 nova_compute[254880]: 2026-01-26 10:18:17.143 254884 DEBUG nova.compute.manager [-] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 10:18:17 compute-0 nova_compute[254880]: 2026-01-26 10:18:17.143 254884 DEBUG nova.network.neutron [-] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 10:18:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:17.189Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:18:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:17.190Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:18:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:17.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1084: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 4 op/s
Jan 26 10:18:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:18:18
Jan 26 10:18:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:18:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:18:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', '.nfs', 'vms', 'backups']
Jan 26 10:18:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:18:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:18:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:18:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:18:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:18:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:18:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:18:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:18:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:18:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:18.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:18:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:19.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:19 compute-0 ceph-mon[74456]: pgmap v1084: 353 pgs: 353 active+clean; 121 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 4 op/s
Jan 26 10:18:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:18:19 compute-0 nova_compute[254880]: 2026-01-26 10:18:19.113 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:18:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:18:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:19.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:18:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 26 10:18:20 compute-0 nova_compute[254880]: 2026-01-26 10:18:20.130 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:20 compute-0 ceph-mon[74456]: pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 26 10:18:20 compute-0 nova_compute[254880]: 2026-01-26 10:18:20.306 254884 DEBUG nova.network.neutron [-] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 10:18:20 compute-0 nova_compute[254880]: 2026-01-26 10:18:20.329 254884 INFO nova.compute.manager [-] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Took 3.19 seconds to deallocate network for instance.
Jan 26 10:18:20 compute-0 nova_compute[254880]: 2026-01-26 10:18:20.393 254884 DEBUG oslo_concurrency.lockutils [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:18:20 compute-0 nova_compute[254880]: 2026-01-26 10:18:20.394 254884 DEBUG oslo_concurrency.lockutils [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:18:20 compute-0 nova_compute[254880]: 2026-01-26 10:18:20.398 254884 DEBUG nova.compute.manager [req-21960c2b-d06d-48e3-a20a-1a1a5d4d2e4a req-c8183aff-2ed6-4f96-83c0-8a06c91001d1 b3cedad3bffb466c8c89f0c66461ccc7 d522de7bb1e84f808e55320745abb962 - - default default] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Received event network-vif-deleted-6691b1fe-ff5a-4e6c-88fa-00ca95260dec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 10:18:20 compute-0 nova_compute[254880]: 2026-01-26 10:18:20.457 254884 DEBUG oslo_concurrency.processutils [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:18:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:18:20 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1394289376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:18:20 compute-0 nova_compute[254880]: 2026-01-26 10:18:20.931 254884 DEBUG oslo_concurrency.processutils [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:18:20 compute-0 nova_compute[254880]: 2026-01-26 10:18:20.941 254884 DEBUG nova.compute.provider_tree [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:18:20 compute-0 nova_compute[254880]: 2026-01-26 10:18:20.964 254884 DEBUG nova.scheduler.client.report [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:18:20 compute-0 nova_compute[254880]: 2026-01-26 10:18:20.997 254884 DEBUG oslo_concurrency.lockutils [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:18:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:21.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:21 compute-0 nova_compute[254880]: 2026-01-26 10:18:21.037 254884 INFO nova.scheduler.client.report [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Deleted allocations for instance 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049
Jan 26 10:18:21 compute-0 nova_compute[254880]: 2026-01-26 10:18:21.110 254884 DEBUG oslo_concurrency.lockutils [None req-ab64cc7a-a414-4a1b-ae66-c7f4d985f2dd c1208d3e25b940ea93fe76884c7a53db 6ed221b375a44fc2bb2a8f232c5446e7 - - default default] Lock "66b4bcb5-3da1-4f3e-818d-9ff52a3e5049" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:18:21 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1394289376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:18:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:21.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:18:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:18:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:18:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:18:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:18:22 compute-0 ceph-mon[74456]: pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:18:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:23.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:23.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:18:24 compute-0 nova_compute[254880]: 2026-01-26 10:18:24.069 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:24 compute-0 nova_compute[254880]: 2026-01-26 10:18:24.115 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:24 compute-0 nova_compute[254880]: 2026-01-26 10:18:24.151 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:18:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:25.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:25 compute-0 ceph-mon[74456]: pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 10:18:25 compute-0 nova_compute[254880]: 2026-01-26 10:18:25.132 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:25 compute-0 podman[277073]: 2026-01-26 10:18:25.169312079 +0000 UTC m=+0.088230229 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 26 10:18:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:25.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:25 compute-0 sudo[277093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:18:25 compute-0 sudo[277093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:18:25 compute-0 sudo[277093]: pam_unix(sudo:session): session closed for user root
Jan 26 10:18:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 26 10:18:26 compute-0 ceph-mon[74456]: pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 26 10:18:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:18:26] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Jan 26 10:18:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:18:26] "GET /metrics HTTP/1.1" 200 48551 "" "Prometheus/2.51.0"
Jan 26 10:18:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:18:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:18:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:18:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:18:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:27.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 10:18:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 7218 writes, 31K keys, 7217 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 7218 writes, 7217 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1558 writes, 6651 keys, 1558 commit groups, 1.0 writes per commit group, ingest: 11.70 MB, 0.02 MB/s
                                           Interval WAL: 1558 writes, 1558 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     84.3      0.61              0.13        18    0.034       0      0       0.0       0.0
                                             L6      1/0   13.72 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3    167.1    142.5      1.55              0.51        17    0.091     93K   9520       0.0       0.0
                                            Sum      1/0   13.72 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3    119.6    126.0      2.16              0.65        35    0.062     93K   9520       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.7    175.6    180.1      0.38              0.15         8    0.047     26K   2582       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    167.1    142.5      1.55              0.51        17    0.091     93K   9520       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    145.4      0.36              0.13        17    0.021       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.26              0.00         1    0.259       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.051, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.27 GB write, 0.11 MB/s write, 0.25 GB read, 0.11 MB/s read, 2.2 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a9cd69b350#2 capacity: 304.00 MB usage: 22.73 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000165 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1226,22.00 MB,7.23558%) FilterBlock(36,272.17 KB,0.0874319%) IndexBlock(36,482.81 KB,0.155098%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 10:18:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:27.191Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:18:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:27.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:18:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:27.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 26 10:18:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:28.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:18:28 compute-0 nova_compute[254880]: 2026-01-26 10:18:28.948 254884 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769422693.947429, 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 10:18:28 compute-0 nova_compute[254880]: 2026-01-26 10:18:28.948 254884 INFO nova.compute.manager [-] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] VM Stopped (Lifecycle Event)
Jan 26 10:18:28 compute-0 nova_compute[254880]: 2026-01-26 10:18:28.968 254884 DEBUG nova.compute.manager [None req-fa6f1652-70f1-40c7-a891-8284f305592c - - - - - -] [instance: 66b4bcb5-3da1-4f3e-818d-9ff52a3e5049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 10:18:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:29.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:29 compute-0 ceph-mon[74456]: pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 26 10:18:29 compute-0 nova_compute[254880]: 2026-01-26 10:18:29.118 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:29.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
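_set_new_cache_sizes appears to be the monitor's cache tuner rebalancing memory between the osdmap caches and the RocksDB key-value cache; converting its byte counts shows kv_alloc is exactly the 304.00 MB BinnedLRUCache capacity reported in the RocksDB dump earlier in this log. Pure arithmetic on the logged numbers:

    MIB = 1024 * 1024
    cache_size = 1020054731            # ~972.8 MiB overall target
    inc_alloc = full_alloc = 348127232 # exactly 332 MiB each
    kv_alloc = 318767104               # exactly 304 MiB, the RocksDB block cache

    for name, val in [("inc/full_alloc", inc_alloc), ("kv_alloc", kv_alloc)]:
        print(f"{name}: {val / MIB:.0f} MiB ({val / cache_size:.1%} of cache_size)")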
Jan 26 10:18:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1090: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 26 10:18:30 compute-0 nova_compute[254880]: 2026-01-26 10:18:30.135 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:31 compute-0 ceph-mon[74456]: pgmap v1090: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 26 10:18:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:31.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:31.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:18:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:18:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:18:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
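This four-line ganesha block repeats about every five seconds: the NFS server re-enters a 90-second grace period, reloads client info from the backend, finds nothing to reclaim (clid count(0)), and rados_cluster_grace_enforcing returns -45. Reading that as a negative errno is an assumption about the recovery backend's convention, but decoding it takes two lines; on Linux, errno 45 is EL2NSYNC ("Level 2 not synchronized"):

    import errno, os

    code = 45  # from ret=-45 above; errno names are platform-dependent
    print(errno.errorcode.get(code), "-", os.strerror(code))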
Jan 26 10:18:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1091: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:32 compute-0 ceph-mon[74456]: pgmap v1091: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:33.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:33.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:18:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:18:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
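The mgr polls the monitor for the OSD blocklist on a steady cadence (here at 10:18:33, again at 10:18:48 and 10:19:03), and each poll shows up twice: once as handle_command and once as an audit-channel dispatch. The same monitor command can be issued from any client through the python-rados binding; a sketch, assuming a local ceph.conf and a client.admin keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b"")
    print(ret, json.loads(outbuf or b"[]"))
    cluster.shutdown()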
Jan 26 10:18:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1092: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:34 compute-0 nova_compute[254880]: 2026-01-26 10:18:34.123 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:18:34 compute-0 ceph-mon[74456]: pgmap v1092: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:35.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:35 compute-0 nova_compute[254880]: 2026-01-26 10:18:35.136 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:18:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:35.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:18:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1093: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:18:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:18:36] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:18:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:18:36] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
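Prometheus 2.51.0, scraping from 192.168.122.100, hits the mgr prometheus module every ten seconds and gets about 48 kB of metrics back. The same endpoint can be read directly; in this sketch the port is an assumption (9283 is the module's default, the log never shows it) and ceph_health_status is just one example metric family:

    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as r:
        for line in r.read().decode().splitlines():
            if line.startswith("ceph_health_status"):
                print(line)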
Jan 26 10:18:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:18:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:18:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:18:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:18:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:37.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:37 compute-0 ceph-mon[74456]: pgmap v1093: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:18:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:37.192Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:18:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:37.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:37 compute-0 nova_compute[254880]: 2026-01-26 10:18:37.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:18:37 compute-0 nova_compute[254880]: 2026-01-26 10:18:37.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:18:37 compute-0 nova_compute[254880]: 2026-01-26 10:18:37.977 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:18:37 compute-0 nova_compute[254880]: 2026-01-26 10:18:37.978 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:18:37 compute-0 nova_compute[254880]: 2026-01-26 10:18:37.978 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
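The three lockutils lines show the standard oslo.concurrency pattern: the resource tracker serializes on a named "compute_resources" semaphore and logs how long it waited for and held the lock. A minimal sketch of that pattern (the same decorator, not nova's actual code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_tracker():
        print("inside the compute_resources critical section")

    update_tracker()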
Jan 26 10:18:37 compute-0 nova_compute[254880]: 2026-01-26 10:18:37.978 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:18:37 compute-0 nova_compute[254880]: 2026-01-26 10:18:37.978 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:18:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1094: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:38 compute-0 ceph-mon[74456]: pgmap v1094: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:18:38 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2256897382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:18:38 compute-0 nova_compute[254880]: 2026-01-26 10:18:38.472 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
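For the resource audit, nova shells out to the exact command logged above and parses its JSON. Re-running it by hand looks like this (the top-level "stats" keys are those of a recent ceph df; it needs the same conf file and client.openstack key nova uses):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], "bytes total,",
          stats["total_avail_bytes"], "bytes available")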
Jan 26 10:18:38 compute-0 podman[277154]: 2026-01-26 10:18:38.599616179 +0000 UTC m=+0.085153734 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:18:38 compute-0 nova_compute[254880]: 2026-01-26 10:18:38.644 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:18:38 compute-0 nova_compute[254880]: 2026-01-26 10:18:38.645 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4557MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:18:38 compute-0 nova_compute[254880]: 2026-01-26 10:18:38.646 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:18:38 compute-0 nova_compute[254880]: 2026-01-26 10:18:38.646 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:18:38 compute-0 nova_compute[254880]: 2026-01-26 10:18:38.721 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:18:38 compute-0 nova_compute[254880]: 2026-01-26 10:18:38.722 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:18:38 compute-0 nova_compute[254880]: 2026-01-26 10:18:38.739 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:18:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:38.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:18:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:39.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:39 compute-0 nova_compute[254880]: 2026-01-26 10:18:39.125 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:18:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/793596853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:18:39 compute-0 nova_compute[254880]: 2026-01-26 10:18:39.156 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:18:39 compute-0 nova_compute[254880]: 2026-01-26 10:18:39.162 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:18:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2256897382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:18:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/793596853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:18:39 compute-0 nova_compute[254880]: 2026-01-26 10:18:39.180 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
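The inventory nova reports to placement determines schedulable capacity as (total - reserved) * allocation_ratio per resource class. Plugging in the logged values:

    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["ratio"])

which gives 32 schedulable VCPUs, 7167 MB of RAM, and 52.2 GB of disk for this host.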
Jan 26 10:18:39 compute-0 nova_compute[254880]: 2026-01-26 10:18:39.216 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:18:39 compute-0 nova_compute[254880]: 2026-01-26 10:18:39.217 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:18:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:39.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:18:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1095: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:18:40 compute-0 nova_compute[254880]: 2026-01-26 10:18:40.138 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:40 compute-0 ceph-mon[74456]: pgmap v1095: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:18:40 compute-0 nova_compute[254880]: 2026-01-26 10:18:40.217 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:18:40 compute-0 nova_compute[254880]: 2026-01-26 10:18:40.217 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:18:40 compute-0 nova_compute[254880]: 2026-01-26 10:18:40.218 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:18:40 compute-0 nova_compute[254880]: 2026-01-26 10:18:40.243 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:18:40 compute-0 nova_compute[254880]: 2026-01-26 10:18:40.244 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:18:40 compute-0 nova_compute[254880]: 2026-01-26 10:18:40.980 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:18:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:41.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2704312734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:18:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:41.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:41 compute-0 nova_compute[254880]: 2026-01-26 10:18:41.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:18:41 compute-0 nova_compute[254880]: 2026-01-26 10:18:41.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:18:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:18:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:18:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:18:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:18:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1096: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:42 compute-0 ceph-mon[74456]: pgmap v1096: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2708101103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:18:42 compute-0 sshd-session[277205]: Invalid user zabbix from 157.245.76.178 port 48324
Jan 26 10:18:42 compute-0 sshd-session[277205]: Connection closed by invalid user zabbix 157.245.76.178 port 48324 [preauth]
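A single failed login for the nonexistent user "zabbix" from 157.245.76.178, closed before authentication, is the signature of routine internet-wide SSH scanning. A toy tally over a saved journal excerpt (the path is a placeholder, the pattern matches the sshd line format above):

    import collections
    import re

    pat = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    hits = collections.Counter()
    with open("journal.txt") as fh:  # hypothetical export of this log
        for line in fh:
            m = pat.search(line)
            if m:
                hits[m.group(2)] += 1  # count attempts per source IP
    print(hits.most_common(5))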
Jan 26 10:18:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:43.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:43.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:43 compute-0 nova_compute[254880]: 2026-01-26 10:18:43.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:18:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1097: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:44 compute-0 nova_compute[254880]: 2026-01-26 10:18:44.128 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:18:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:45.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:45 compute-0 ceph-mon[74456]: pgmap v1097: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:45 compute-0 nova_compute[254880]: 2026-01-26 10:18:45.139 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:45.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:45 compute-0 sudo[277211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:18:45 compute-0 sudo[277211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:18:45 compute-0 sudo[277211]: pam_unix(sudo:session): session closed for user root
Jan 26 10:18:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1098: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:18:46 compute-0 ceph-mon[74456]: pgmap v1098: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:18:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:18:46] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:18:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:18:46] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:18:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:18:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:18:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:18:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:18:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:47.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:47.193Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:18:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:47.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:18:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3089679829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:18:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3843976593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:18:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:47.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:47 compute-0 nova_compute[254880]: 2026-01-26 10:18:47.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:18:47 compute-0 nova_compute[254880]: 2026-01-26 10:18:47.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:18:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1099: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:48 compute-0 ceph-mon[74456]: pgmap v1099: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:18:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:18:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:18:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:18:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:18:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:18:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:18:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:18:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:48.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:18:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:49.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:49 compute-0 nova_compute[254880]: 2026-01-26 10:18:49.132 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:18:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:49.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:18:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1100: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:18:50 compute-0 nova_compute[254880]: 2026-01-26 10:18:50.142 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:50 compute-0 ceph-mon[74456]: pgmap v1100: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:18:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:51.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:51.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1101: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:18:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:18:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:18:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:18:52 compute-0 ceph-mon[74456]: pgmap v1101: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:53.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:53.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1102: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:54 compute-0 nova_compute[254880]: 2026-01-26 10:18:54.134 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:18:54 compute-0 ovn_controller[155832]: 2026-01-26T10:18:54Z|00090|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Jan 26 10:18:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:54.704 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:18:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:54.704 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:18:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:18:54.704 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:18:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:55.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:55 compute-0 ceph-mon[74456]: pgmap v1102: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:55 compute-0 nova_compute[254880]: 2026-01-26 10:18:55.145 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:55.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1103: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:18:56 compute-0 podman[277247]: 2026-01-26 10:18:56.119102614 +0000 UTC m=+0.053701281 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 10:18:56 compute-0 ceph-mon[74456]: pgmap v1103: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:18:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:18:56] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Jan 26 10:18:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:18:56] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Jan 26 10:18:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:18:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:18:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:18:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:18:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:18:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:57.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:57.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:18:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:57.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:18:58.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:18:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:18:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:18:59.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:18:59 compute-0 ceph-mon[74456]: pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:18:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1448684425' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:18:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1448684425' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:18:59 compute-0 nova_compute[254880]: 2026-01-26 10:18:59.138 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:18:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:18:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:18:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:18:59.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:18:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:19:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:00 compute-0 ceph-mon[74456]: pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:00 compute-0 nova_compute[254880]: 2026-01-26 10:19:00.189 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:19:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:01.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:19:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:01.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:19:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:19:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:19:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
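The four ganesha.nfsd lines above show the NFS service re-entering a 90-second grace window, reloading (zero) client records from its recovery backend, and rados_cluster_grace_enforcing returning ret=-45. Whether that value is a negated host errno is an assumption (the RADOS recovery backend may use its own codes), but if it is, it decodes like this:

    import os

    ret = -45                 # value from the ganesha.nfsd line above
    print(os.strerror(-ret))  # on Linux errno 45: "Level 2 not synchronized"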
Jan 26 10:19:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:03.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:03 compute-0 ceph-mon[74456]: pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:03.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:19:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:19:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:04 compute-0 nova_compute[254880]: 2026-01-26 10:19:04.140 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:04 compute-0 sudo[277276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:19:04 compute-0 sudo[277276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:19:04 compute-0 sudo[277276]: pam_unix(sudo:session): session closed for user root
Jan 26 10:19:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:19:04 compute-0 ceph-mon[74456]: pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:04 compute-0 sudo[277301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:19:04 compute-0 sudo[277301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:19:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:19:04 compute-0 sudo[277301]: pam_unix(sudo:session): session closed for user root
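The sudo pair above (which python3, then the copied cephadm binary with gather-facts) is the mgr's cephadm module taking its periodic host inventory over SSH as ceph-admin; gather-facts prints host facts as JSON on stdout. A sketch of invoking it the same way, reusing the binary path and --timeout value verbatim from the COMMAND= field (key names in the output vary by cephadm version):

    import json, subprocess

    # Path and timeout copied from the sudo COMMAND= field above.
    CEPHADM = ("/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "gather-facts"],
        check=True, capture_output=True, text=True,
    ).stdout
    facts = json.loads(out)
    print(facts.get("hostname"), facts.get("kernel"))  # typical top-level keys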
Jan 26 10:19:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:19:04 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:19:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:19:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:19:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:19:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 26 10:19:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:19:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:19:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:19:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:19:04 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:19:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:19:04 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:19:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:19:04 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
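Each mgr-to-mon command in the 10:19:04 burst above appears twice: once as the mon's handle_command trace and once as an audit-channel dispatch record. Note the two audit entries with no cmd payload at all; they correspond to the config-key set commands, whose arguments are withheld from the audit log since config-key values can carry secrets. The burst itself is the cephadm module refreshing its state: minimal conf, client.admin and bootstrap-osd keys, the destroyed-OSD tree, and its own config-key entries. The same kind of query can be issued from Python via librados, assuming python3-rados and an admin keyring on the host:

    import json
    import rados

    # Assumes /etc/ceph/ceph.conf plus a keyring readable by the caller.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"], "format": "json"})
    ret, outbuf, errs = cluster.mon_command(cmd, b"")
    print(ret, json.loads(outbuf) if outbuf else errs)
    cluster.shutdown()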
Jan 26 10:19:04 compute-0 sudo[277359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:19:04 compute-0 sudo[277359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:19:04 compute-0 sudo[277359]: pam_unix(sudo:session): session closed for user root
Jan 26 10:19:05 compute-0 sudo[277384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:19:05 compute-0 sudo[277384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:19:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:05.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:05 compute-0 nova_compute[254880]: 2026-01-26 10:19:05.191 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:19:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:19:05 compute-0 ceph-mon[74456]: pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 26 10:19:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:19:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:19:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:19:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:19:05 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:19:05 compute-0 podman[277449]: 2026-01-26 10:19:05.377463994 +0000 UTC m=+0.036472363 container create 4c91facaa61a6e095cc6dd101907aeadcee48db75c991a5fe94caa64aa0a0075 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_sammet, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 10:19:05 compute-0 systemd[1]: Started libpod-conmon-4c91facaa61a6e095cc6dd101907aeadcee48db75c991a5fe94caa64aa0a0075.scope.
Jan 26 10:19:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:05.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:19:05 compute-0 podman[277449]: 2026-01-26 10:19:05.452345975 +0000 UTC m=+0.111354354 container init 4c91facaa61a6e095cc6dd101907aeadcee48db75c991a5fe94caa64aa0a0075 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_sammet, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 10:19:05 compute-0 podman[277449]: 2026-01-26 10:19:05.361408404 +0000 UTC m=+0.020416793 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:19:05 compute-0 podman[277449]: 2026-01-26 10:19:05.460311044 +0000 UTC m=+0.119319413 container start 4c91facaa61a6e095cc6dd101907aeadcee48db75c991a5fe94caa64aa0a0075 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_sammet, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 10:19:05 compute-0 podman[277449]: 2026-01-26 10:19:05.463806246 +0000 UTC m=+0.122814625 container attach 4c91facaa61a6e095cc6dd101907aeadcee48db75c991a5fe94caa64aa0a0075 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_sammet, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 26 10:19:05 compute-0 cool_sammet[277465]: 167 167
Jan 26 10:19:05 compute-0 systemd[1]: libpod-4c91facaa61a6e095cc6dd101907aeadcee48db75c991a5fe94caa64aa0a0075.scope: Deactivated successfully.
Jan 26 10:19:05 compute-0 podman[277449]: 2026-01-26 10:19:05.466668243 +0000 UTC m=+0.125676612 container died 4c91facaa61a6e095cc6dd101907aeadcee48db75c991a5fe94caa64aa0a0075 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_sammet, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:19:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a129d445e8fda7cde9c8c68fff1c313ae7021d81646f3f3f7f345df58a218185-merged.mount: Deactivated successfully.
Jan 26 10:19:05 compute-0 podman[277449]: 2026-01-26 10:19:05.51305743 +0000 UTC m=+0.172065799 container remove 4c91facaa61a6e095cc6dd101907aeadcee48db75c991a5fe94caa64aa0a0075 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_sammet, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 10:19:05 compute-0 systemd[1]: libpod-conmon-4c91facaa61a6e095cc6dd101907aeadcee48db75c991a5fe94caa64aa0a0075.scope: Deactivated successfully.
Jan 26 10:19:05 compute-0 sshd-session[277274]: Invalid user ubuntu from 117.50.196.2 port 60778
Jan 26 10:19:05 compute-0 podman[277492]: 2026-01-26 10:19:05.686423731 +0000 UTC m=+0.054310146 container create 10be7937e5d902c9865154e50fb73261f6d7e9d237f41fe9052e4c201bb345d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 26 10:19:05 compute-0 systemd[1]: Started libpod-conmon-10be7937e5d902c9865154e50fb73261f6d7e9d237f41fe9052e4c201bb345d0.scope.
Jan 26 10:19:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:19:05 compute-0 podman[277492]: 2026-01-26 10:19:05.66822619 +0000 UTC m=+0.036112625 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b9ce2e41815cc0cd0831822b624bfbc6402550d88a6b21d7a0d523ac75cb3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b9ce2e41815cc0cd0831822b624bfbc6402550d88a6b21d7a0d523ac75cb3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b9ce2e41815cc0cd0831822b624bfbc6402550d88a6b21d7a0d523ac75cb3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b9ce2e41815cc0cd0831822b624bfbc6402550d88a6b21d7a0d523ac75cb3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b9ce2e41815cc0cd0831822b624bfbc6402550d88a6b21d7a0d523ac75cb3d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:19:05 compute-0 sshd-session[277274]: Received disconnect from 117.50.196.2 port 60778:11:  [preauth]
Jan 26 10:19:05 compute-0 sshd-session[277274]: Disconnected from invalid user ubuntu 117.50.196.2 port 60778 [preauth]
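The three sshd-session lines above are unrelated to the deployment: an opportunistic SSH probe for the nonexistent user "ubuntu" from 117.50.196.2 that disconnects before authenticating. When reading a dump like this one, noise of that shape can be filtered out with a small stream filter (a sketch for log triage, not a security control):

    import sys

    # Drop the SSH-probe noise seen above while keeping everything else.
    for line in sys.stdin:
        if "sshd-session" in line and ("Invalid user" in line or "[preauth]" in line):
            continue
        sys.stdout.write(line)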
Jan 26 10:19:05 compute-0 podman[277492]: 2026-01-26 10:19:05.784799578 +0000 UTC m=+0.152685993 container init 10be7937e5d902c9865154e50fb73261f6d7e9d237f41fe9052e4c201bb345d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:19:05 compute-0 podman[277492]: 2026-01-26 10:19:05.79295723 +0000 UTC m=+0.160843645 container start 10be7937e5d902c9865154e50fb73261f6d7e9d237f41fe9052e4c201bb345d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:19:05 compute-0 podman[277492]: 2026-01-26 10:19:05.796954665 +0000 UTC m=+0.164841100 container attach 10be7937e5d902c9865154e50fb73261f6d7e9d237f41fe9052e4c201bb345d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:19:05 compute-0 sudo[277514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:19:05 compute-0 sudo[277514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:19:05 compute-0 sudo[277514]: pam_unix(sudo:session): session closed for user root
Jan 26 10:19:06 compute-0 trusting_haibt[277509]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:19:06 compute-0 trusting_haibt[277509]: --> All data devices are unavailable
Jan 26 10:19:06 compute-0 systemd[1]: libpod-10be7937e5d902c9865154e50fb73261f6d7e9d237f41fe9052e4c201bb345d0.scope: Deactivated successfully.
Jan 26 10:19:06 compute-0 podman[277549]: 2026-01-26 10:19:06.239655864 +0000 UTC m=+0.025593696 container died 10be7937e5d902c9865154e50fb73261f6d7e9d237f41fe9052e4c201bb345d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_haibt, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 10:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-60b9ce2e41815cc0cd0831822b624bfbc6402550d88a6b21d7a0d523ac75cb3d-merged.mount: Deactivated successfully.
Jan 26 10:19:06 compute-0 podman[277549]: 2026-01-26 10:19:06.280255964 +0000 UTC m=+0.066193776 container remove 10be7937e5d902c9865154e50fb73261f6d7e9d237f41fe9052e4c201bb345d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:19:06 compute-0 systemd[1]: libpod-conmon-10be7937e5d902c9865154e50fb73261f6d7e9d237f41fe9052e4c201bb345d0.scope: Deactivated successfully.
Jan 26 10:19:06 compute-0 sudo[277384]: pam_unix(sudo:session): session closed for user root
Jan 26 10:19:06 compute-0 sudo[277565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:19:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 26 10:19:06 compute-0 sudo[277565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:19:06 compute-0 sudo[277565]: pam_unix(sudo:session): session closed for user root
Jan 26 10:19:06 compute-0 sudo[277590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:19:06 compute-0 sudo[277590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:19:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:19:06] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:19:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:19:06] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:19:06 compute-0 podman[277656]: 2026-01-26 10:19:06.809302597 +0000 UTC m=+0.040281464 container create a3e629d095d938bca31cb827f231f31246ada4f9785e240b62f50531b1cce1ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hermann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:19:06 compute-0 systemd[1]: Started libpod-conmon-a3e629d095d938bca31cb827f231f31246ada4f9785e240b62f50531b1cce1ad.scope.
Jan 26 10:19:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:19:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:19:06 compute-0 podman[277656]: 2026-01-26 10:19:06.792008728 +0000 UTC m=+0.022987625 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:19:06 compute-0 podman[277656]: 2026-01-26 10:19:06.895640719 +0000 UTC m=+0.126619586 container init a3e629d095d938bca31cb827f231f31246ada4f9785e240b62f50531b1cce1ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hermann, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 26 10:19:06 compute-0 podman[277656]: 2026-01-26 10:19:06.902354888 +0000 UTC m=+0.133333765 container start a3e629d095d938bca31cb827f231f31246ada4f9785e240b62f50531b1cce1ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 26 10:19:06 compute-0 podman[277656]: 2026-01-26 10:19:06.905644435 +0000 UTC m=+0.136623322 container attach a3e629d095d938bca31cb827f231f31246ada4f9785e240b62f50531b1cce1ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hermann, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Jan 26 10:19:06 compute-0 xenodochial_hermann[277673]: 167 167
Jan 26 10:19:06 compute-0 systemd[1]: libpod-a3e629d095d938bca31cb827f231f31246ada4f9785e240b62f50531b1cce1ad.scope: Deactivated successfully.
Jan 26 10:19:06 compute-0 podman[277656]: 2026-01-26 10:19:06.907693264 +0000 UTC m=+0.138672131 container died a3e629d095d938bca31cb827f231f31246ada4f9785e240b62f50531b1cce1ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hermann, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 10:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9350ce79ae11ab6b87eadd0aceeb2d8af62e5cedea4166f09eedc805c4c12747-merged.mount: Deactivated successfully.
Jan 26 10:19:06 compute-0 podman[277656]: 2026-01-26 10:19:06.945119339 +0000 UTC m=+0.176098196 container remove a3e629d095d938bca31cb827f231f31246ada4f9785e240b62f50531b1cce1ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 26 10:19:06 compute-0 systemd[1]: libpod-conmon-a3e629d095d938bca31cb827f231f31246ada4f9785e240b62f50531b1cce1ad.scope: Deactivated successfully.
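The podman blocks in this section all follow the same shape: image pull (a cache hit on the same digest), container create/init/start/attach, a line or two of output, container died, overlay unmount, container remove, and both libpod scopes deactivated, all within roughly 150 ms. These are cephadm's throwaway helper containers, auto-named by podman (cool_sammet, trusting_haibt, xenodochial_hermann, ...); the bare "167 167" output is consistent with cephadm probing the uid/gid of the ceph user inside the image, which is 167:167 in these builds. Container lifetimes can be recovered from the timestamps in these lines, as a sketch:

    import re
    from datetime import datetime

    # Matches the podman journal lines above, e.g.
    # "... 2026-01-26 10:19:06.905644435 +0000 UTC m=+0.138 container start <64-hex-id> (...)"
    PAT = re.compile(
        r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
        r".*container (start|died) ([0-9a-f]{64})"
    )

    def lifetimes(lines):
        started = {}
        for line in lines:
            m = PAT.search(line)
            if not m:
                continue
            # %f takes at most 6 fractional digits; truncate the nanoseconds.
            ts = datetime.strptime(m.group(1)[:26], "%Y-%m-%d %H:%M:%S.%f")
            cid = m.group(3)
            if m.group(2) == "start":
                started[cid] = ts
            elif cid in started:
                print(cid[:12], (ts - started.pop(cid)).total_seconds(), "s")

    # e.g. lifetimes(open("journal.txt"))  # hypothetical dump of this log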
Jan 26 10:19:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:19:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:19:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:19:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:19:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:07.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:07 compute-0 podman[277697]: 2026-01-26 10:19:07.11346972 +0000 UTC m=+0.049064451 container create f06d4920477297299d7d9b5641652f79a62435790c9561db30d53fa45c4989aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hamilton, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 26 10:19:07 compute-0 systemd[1]: Started libpod-conmon-f06d4920477297299d7d9b5641652f79a62435790c9561db30d53fa45c4989aa.scope.
Jan 26 10:19:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1575424fb218c0f888d57fbbae0e0f0e8b0d5ae40ea293068edbeb3471adec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1575424fb218c0f888d57fbbae0e0f0e8b0d5ae40ea293068edbeb3471adec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1575424fb218c0f888d57fbbae0e0f0e8b0d5ae40ea293068edbeb3471adec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1575424fb218c0f888d57fbbae0e0f0e8b0d5ae40ea293068edbeb3471adec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:19:07 compute-0 podman[277697]: 2026-01-26 10:19:07.088611623 +0000 UTC m=+0.024206394 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:19:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:07.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:19:07 compute-0 podman[277697]: 2026-01-26 10:19:07.199058855 +0000 UTC m=+0.134653596 container init f06d4920477297299d7d9b5641652f79a62435790c9561db30d53fa45c4989aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hamilton, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:19:07 compute-0 podman[277697]: 2026-01-26 10:19:07.205314153 +0000 UTC m=+0.140908914 container start f06d4920477297299d7d9b5641652f79a62435790c9561db30d53fa45c4989aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hamilton, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:19:07 compute-0 podman[277697]: 2026-01-26 10:19:07.210204389 +0000 UTC m=+0.145799140 container attach f06d4920477297299d7d9b5641652f79a62435790c9561db30d53fa45c4989aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hamilton, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:19:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:07.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]: {
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:     "0": [
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:         {
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:             "devices": [
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "/dev/loop3"
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:             ],
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:             "lv_name": "ceph_lv0",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:             "lv_size": "21470642176",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:             "name": "ceph_lv0",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:             "tags": {
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "ceph.cluster_name": "ceph",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "ceph.crush_device_class": "",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "ceph.encrypted": "0",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "ceph.osd_id": "0",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "ceph.type": "block",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "ceph.vdo": "0",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:                 "ceph.with_tpm": "0"
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:             },
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:             "type": "block",
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:             "vg_name": "ceph_vg0"
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:         }
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]:     ]
Jan 26 10:19:07 compute-0 vigilant_hamilton[277714]: }
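The JSON block above is the output of the ceph-volume lvm list --format json call launched at 10:19:06: this host carries a single OSD (osd_id 0, osd_fsid ac85653c-ceaa-4fd5-80ce-94914596ed49) on LV ceph_vg0/ceph_lv0, backed by /dev/loop3. It also explains the earlier lvm batch result, "All data devices are unavailable": the only candidate LV is already prepared as an OSD, so there was nothing to create. A sketch of reducing that JSON to an osd-to-device map, with the literal trimmed to just the fields used:

    import json

    # Structure as printed above: {"<osd_id>": [{lv record with "devices": [...]}]}
    raw_json = '''
    {"0": [{"devices": ["/dev/loop3"],
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "type": "block"}]}
    '''

    for osd_id, lvs in json.loads(raw_json).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on "
                  f"{','.join(lv['devices'])} (type={lv['type']})")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (type=block)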
Jan 26 10:19:07 compute-0 systemd[1]: libpod-f06d4920477297299d7d9b5641652f79a62435790c9561db30d53fa45c4989aa.scope: Deactivated successfully.
Jan 26 10:19:07 compute-0 podman[277697]: 2026-01-26 10:19:07.502224654 +0000 UTC m=+0.437819385 container died f06d4920477297299d7d9b5641652f79a62435790c9561db30d53fa45c4989aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Jan 26 10:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d1575424fb218c0f888d57fbbae0e0f0e8b0d5ae40ea293068edbeb3471adec-merged.mount: Deactivated successfully.
Jan 26 10:19:07 compute-0 podman[277697]: 2026-01-26 10:19:07.547898205 +0000 UTC m=+0.483492936 container remove f06d4920477297299d7d9b5641652f79a62435790c9561db30d53fa45c4989aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 26 10:19:07 compute-0 systemd[1]: libpod-conmon-f06d4920477297299d7d9b5641652f79a62435790c9561db30d53fa45c4989aa.scope: Deactivated successfully.
Jan 26 10:19:07 compute-0 sudo[277590]: pam_unix(sudo:session): session closed for user root
Jan 26 10:19:07 compute-0 sudo[277733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:19:07 compute-0 sudo[277733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:19:07 compute-0 sudo[277733]: pam_unix(sudo:session): session closed for user root
Jan 26 10:19:07 compute-0 sudo[277758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:19:07 compute-0 sudo[277758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:19:07 compute-0 ceph-mon[74456]: pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:19:08 compute-0 podman[277824]: 2026-01-26 10:19:08.152261107 +0000 UTC m=+0.044184135 container create 86b9de834c6c6b343bf11bc8dbb34448244afee9326bb411f9ab175eba8e16d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hopper, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Jan 26 10:19:08 compute-0 systemd[1]: Started libpod-conmon-86b9de834c6c6b343bf11bc8dbb34448244afee9326bb411f9ab175eba8e16d4.scope.
Jan 26 10:19:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:19:08 compute-0 podman[277824]: 2026-01-26 10:19:08.135682666 +0000 UTC m=+0.027605714 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:19:08 compute-0 podman[277824]: 2026-01-26 10:19:08.235279062 +0000 UTC m=+0.127202190 container init 86b9de834c6c6b343bf11bc8dbb34448244afee9326bb411f9ab175eba8e16d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hopper, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 10:19:08 compute-0 podman[277824]: 2026-01-26 10:19:08.247381167 +0000 UTC m=+0.139304235 container start 86b9de834c6c6b343bf11bc8dbb34448244afee9326bb411f9ab175eba8e16d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hopper, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:19:08 compute-0 podman[277824]: 2026-01-26 10:19:08.252452917 +0000 UTC m=+0.144375985 container attach 86b9de834c6c6b343bf11bc8dbb34448244afee9326bb411f9ab175eba8e16d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hopper, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 10:19:08 compute-0 hardcore_hopper[277840]: 167 167
Jan 26 10:19:08 compute-0 systemd[1]: libpod-86b9de834c6c6b343bf11bc8dbb34448244afee9326bb411f9ab175eba8e16d4.scope: Deactivated successfully.
Jan 26 10:19:08 compute-0 conmon[277840]: conmon 86b9de834c6c6b343bf1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86b9de834c6c6b343bf11bc8dbb34448244afee9326bb411f9ab175eba8e16d4.scope/container/memory.events
Jan 26 10:19:08 compute-0 podman[277824]: 2026-01-26 10:19:08.255781096 +0000 UTC m=+0.147704134 container died 86b9de834c6c6b343bf11bc8dbb34448244afee9326bb411f9ab175eba8e16d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hopper, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-480e6cf10fceac14f35f42a13f0c8ab1beeb88f2694e61e481cc47de9275297f-merged.mount: Deactivated successfully.
Jan 26 10:19:08 compute-0 podman[277824]: 2026-01-26 10:19:08.298339453 +0000 UTC m=+0.190262481 container remove 86b9de834c6c6b343bf11bc8dbb34448244afee9326bb411f9ab175eba8e16d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:19:08 compute-0 systemd[1]: libpod-conmon-86b9de834c6c6b343bf11bc8dbb34448244afee9326bb411f9ab175eba8e16d4.scope: Deactivated successfully.
Jan 26 10:19:08 compute-0 podman[277868]: 2026-01-26 10:19:08.487600849 +0000 UTC m=+0.064832984 container create 6023dfb3d4121dda41b4b73fbf34089f9e3a4ee786a7689cd557b3c478d6aff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hugle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:19:08 compute-0 systemd[1]: Started libpod-conmon-6023dfb3d4121dda41b4b73fbf34089f9e3a4ee786a7689cd557b3c478d6aff7.scope.
Jan 26 10:19:08 compute-0 podman[277868]: 2026-01-26 10:19:08.455134621 +0000 UTC m=+0.032366846 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:19:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00047850f17136ad09d74e9eb8d742b840e5a74f15328422feece0391343f7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00047850f17136ad09d74e9eb8d742b840e5a74f15328422feece0391343f7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00047850f17136ad09d74e9eb8d742b840e5a74f15328422feece0391343f7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00047850f17136ad09d74e9eb8d742b840e5a74f15328422feece0391343f7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:19:08 compute-0 podman[277868]: 2026-01-26 10:19:08.594961938 +0000 UTC m=+0.172194153 container init 6023dfb3d4121dda41b4b73fbf34089f9e3a4ee786a7689cd557b3c478d6aff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 10:19:08 compute-0 podman[277868]: 2026-01-26 10:19:08.607782871 +0000 UTC m=+0.185015016 container start 6023dfb3d4121dda41b4b73fbf34089f9e3a4ee786a7689cd557b3c478d6aff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 26 10:19:08 compute-0 podman[277868]: 2026-01-26 10:19:08.611319504 +0000 UTC m=+0.188551649 container attach 6023dfb3d4121dda41b4b73fbf34089f9e3a4ee786a7689cd557b3c478d6aff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hugle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:19:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:08.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:19:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:19:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:09.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:09 compute-0 nova_compute[254880]: 2026-01-26 10:19:09.142 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:09 compute-0 podman[277929]: 2026-01-26 10:19:09.174872173 +0000 UTC m=+0.108338414 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:19:09 compute-0 lvm[277984]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:19:09 compute-0 lvm[277984]: VG ceph_vg0 finished
Jan 26 10:19:09 compute-0 happy_hugle[277885]: {}
Jan 26 10:19:09 compute-0 systemd[1]: libpod-6023dfb3d4121dda41b4b73fbf34089f9e3a4ee786a7689cd557b3c478d6aff7.scope: Deactivated successfully.
Jan 26 10:19:09 compute-0 podman[277868]: 2026-01-26 10:19:09.373019028 +0000 UTC m=+0.950251193 container died 6023dfb3d4121dda41b4b73fbf34089f9e3a4ee786a7689cd557b3c478d6aff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 26 10:19:09 compute-0 systemd[1]: libpod-6023dfb3d4121dda41b4b73fbf34089f9e3a4ee786a7689cd557b3c478d6aff7.scope: Consumed 1.151s CPU time.
Jan 26 10:19:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:19:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:09.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:19:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:19:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e00047850f17136ad09d74e9eb8d742b840e5a74f15328422feece0391343f7c-merged.mount: Deactivated successfully.
Jan 26 10:19:09 compute-0 podman[277868]: 2026-01-26 10:19:09.61030908 +0000 UTC m=+1.187541215 container remove 6023dfb3d4121dda41b4b73fbf34089f9e3a4ee786a7689cd557b3c478d6aff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hugle, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:19:09 compute-0 systemd[1]: libpod-conmon-6023dfb3d4121dda41b4b73fbf34089f9e3a4ee786a7689cd557b3c478d6aff7.scope: Deactivated successfully.
Jan 26 10:19:09 compute-0 sudo[277758]: pam_unix(sudo:session): session closed for user root
Jan 26 10:19:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:19:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:19:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:19:09 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:19:09 compute-0 sudo[278003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:19:09 compute-0 sudo[278003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:19:09 compute-0 sudo[278003]: pam_unix(sudo:session): session closed for user root
Jan 26 10:19:10 compute-0 ceph-mon[74456]: pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:19:10 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:19:10 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:19:10 compute-0 nova_compute[254880]: 2026-01-26 10:19:10.225 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:19:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:11.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:19:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:11.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:19:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:19:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:19:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:19:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:19:12 compute-0 ceph-mon[74456]: pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:19:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:19:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:13.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:13.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:14 compute-0 ceph-mon[74456]: pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:19:14 compute-0 nova_compute[254880]: 2026-01-26 10:19:14.145 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.545220) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422754545255, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1059, "num_deletes": 250, "total_data_size": 1875232, "memory_usage": 1908352, "flush_reason": "Manual Compaction"}
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422754555681, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1805113, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31661, "largest_seqno": 32719, "table_properties": {"data_size": 1799984, "index_size": 2589, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10304, "raw_average_key_size": 18, "raw_value_size": 1789742, "raw_average_value_size": 3156, "num_data_blocks": 114, "num_entries": 567, "num_filter_entries": 567, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769422665, "oldest_key_time": 1769422665, "file_creation_time": 1769422754, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 10510 microseconds, and 4131 cpu microseconds.
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.555725) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1805113 bytes OK
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.555746) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.557821) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.557837) EVENT_LOG_v1 {"time_micros": 1769422754557832, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.557853) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1870377, prev total WAL file size 1870377, number of live WAL files 2.
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.558619) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323532' seq:72057594037927935, type:22 .. '6B7600353033' seq:0, type:0; will stop at (end)
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1762KB)], [68(13MB)]
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422754558670, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 16196714, "oldest_snapshot_seqno": -1}
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6277 keys, 14950056 bytes, temperature: kUnknown
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422754784326, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 14950056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14907682, "index_size": 25560, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 162197, "raw_average_key_size": 25, "raw_value_size": 14794154, "raw_average_value_size": 2356, "num_data_blocks": 1015, "num_entries": 6277, "num_filter_entries": 6277, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769422754, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.784567) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 14950056 bytes
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.817075) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.8 rd, 66.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 13.7 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(17.3) write-amplify(8.3) OK, records in: 6791, records dropped: 514 output_compression: NoCompression
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.817126) EVENT_LOG_v1 {"time_micros": 1769422754817108, "job": 38, "event": "compaction_finished", "compaction_time_micros": 225724, "compaction_time_cpu_micros": 27802, "output_level": 6, "num_output_files": 1, "total_output_size": 14950056, "num_input_records": 6791, "num_output_records": 6277, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422754817682, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422754820473, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.558538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.820545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.820551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.820553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.820555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:19:14 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:19:14.820557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:19:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 26 10:19:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:19:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:15.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:19:15 compute-0 nova_compute[254880]: 2026-01-26 10:19:15.227 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:15.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:15 compute-0 ceph-mon[74456]: pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 26 10:19:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:19:16] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:19:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:19:16] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:19:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:19:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:19:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:19:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:19:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:17.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:17.197Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:19:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:19:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:17.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:19:17 compute-0 ceph-mon[74456]: pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:19:18
Jan 26 10:19:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:19:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:19:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['images', 'default.rgw.control', '.mgr', 'vms', '.rgw.root', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.meta', '.nfs', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 26 10:19:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:19:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:19:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:19:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:19:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:19:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:19:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:19:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:19:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:19:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:18.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:19:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:19:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:19:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:19.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:19:19 compute-0 nova_compute[254880]: 2026-01-26 10:19:19.149 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:19:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:19:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:19:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:19.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:19:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:19:19 compute-0 ceph-mon[74456]: pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:20 compute-0 nova_compute[254880]: 2026-01-26 10:19:20.281 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:21.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:19:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:21.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:19:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:19:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:19:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:19:22 compute-0 ceph-mon[74456]: pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:19:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:23 compute-0 sshd-session[278044]: Accepted publickey for zuul from 192.168.122.10 port 47990 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 10:19:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:19:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:23.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:19:23 compute-0 systemd-logind[787]: New session 56 of user zuul.
Jan 26 10:19:23 compute-0 systemd[1]: Started Session 56 of User zuul.
Jan 26 10:19:23 compute-0 sshd-session[278044]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 10:19:23 compute-0 sshd-session[278042]: Invalid user zabbix from 157.245.76.178 port 60908
Jan 26 10:19:23 compute-0 sudo[278048]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 26 10:19:23 compute-0 sudo[278048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:19:23 compute-0 sshd-session[278042]: Connection closed by invalid user zabbix 157.245.76.178 port 60908 [preauth]
Jan 26 10:19:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:23.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:24 compute-0 ceph-mon[74456]: pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:24 compute-0 nova_compute[254880]: 2026-01-26 10:19:24.178 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:19:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000022s ======
Jan 26 10:19:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:25.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 26 10:19:25 compute-0 nova_compute[254880]: 2026-01-26 10:19:25.308 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:25.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:25 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26147 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:25 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16542 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:25 compute-0 sudo[278227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:19:25 compute-0 sudo[278227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:19:25 compute-0 sudo[278227]: pam_unix(sudo:session): session closed for user root
Jan 26 10:19:26 compute-0 ceph-mon[74456]: pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:26 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.25795 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:26 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26162 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:26 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16554 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:19:26] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:19:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:19:26] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:19:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Jan 26 10:19:26 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2457933205' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 10:19:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:26 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.25810 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:19:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:19:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:19:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:19:27 compute-0 ceph-mon[74456]: from='client.26147 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:27 compute-0 ceph-mon[74456]: from='client.16542 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:27 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4108331622' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 10:19:27 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2457933205' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 10:19:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:27.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:27 compute-0 podman[278324]: 2026-01-26 10:19:27.13848596 +0000 UTC m=+0.063603356 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:19:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:27.198Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:19:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:27.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:28 compute-0 ceph-mon[74456]: from='client.25795 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:28 compute-0 ceph-mon[74456]: from='client.26162 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:28 compute-0 ceph-mon[74456]: from='client.16554 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:28 compute-0 ceph-mon[74456]: pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:28 compute-0 ceph-mon[74456]: from='client.25810 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:28 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2766876846' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 10:19:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:28.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:19:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:29.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:29 compute-0 nova_compute[254880]: 2026-01-26 10:19:29.181 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:29 compute-0 ceph-mon[74456]: pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:29.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:19:30 compute-0 nova_compute[254880]: 2026-01-26 10:19:30.310 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:31.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:31.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:31 compute-0 ovs-vsctl[278470]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 26 10:19:31 compute-0 ceph-mon[74456]: pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:19:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:19:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:19:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:19:32 compute-0 virtqemud[254348]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 26 10:19:32 compute-0 virtqemud[254348]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 26 10:19:32 compute-0 virtqemud[254348]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 26 10:19:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:33.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:33 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: cache status {prefix=cache status} (starting...)
Jan 26 10:19:33 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:19:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:33.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:33 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: client ls {prefix=client ls} (starting...)
Jan 26 10:19:33 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:19:33 compute-0 lvm[278823]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:19:33 compute-0 lvm[278823]: VG ceph_vg0 finished
Jan 26 10:19:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:19:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:19:33 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26186 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:34 compute-0 ceph-mon[74456]: pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:19:34 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16578 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:34 compute-0 nova_compute[254880]: 2026-01-26 10:19:34.183 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:34 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: damage ls {prefix=damage ls} (starting...)
Jan 26 10:19:34 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:19:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 26 10:19:34 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:19:34 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26198 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:34 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: dump loads {prefix=dump loads} (starting...)
Jan 26 10:19:34 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:19:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 26 10:19:34 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3883529240' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:19:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:19:34 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 26 10:19:34 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:19:34 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16593 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:34 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26204 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:34 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 26 10:19:34 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:19:34 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 26 10:19:34 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:19:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:19:34 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1218341045' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:19:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:34 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 26 10:19:34 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:19:34 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16605 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:35 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26219 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:35 compute-0 ceph-mon[74456]: from='client.26186 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1630855317' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:19:35 compute-0 ceph-mon[74456]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:19:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3883529240' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:19:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1100227087' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:19:35 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1218341045' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:19:35 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.25840 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:19:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:35.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:19:35 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 26 10:19:35 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:19:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Jan 26 10:19:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2585676288' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 26 10:19:35 compute-0 nova_compute[254880]: 2026-01-26 10:19:35.311 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:35 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 26 10:19:35 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:19:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 26 10:19:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:19:35 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16620 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:35 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.25855 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:35.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:35 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: ops {prefix=ops} (starting...)
Jan 26 10:19:35 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:19:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 26 10:19:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1817716702' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 26 10:19:35 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.25867 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 26 10:19:35 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767220968' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 26 10:19:35 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26258 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.16578 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.26198 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.16593 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.26204 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.16605 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.26219 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.25840 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2569982764' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2585676288' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1939179752' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2035422281' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/109052100' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1817716702' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3839696754' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2767220968' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2878709684' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16638 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.25876 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: session ls {prefix=session ls} (starting...)
Jan 26 10:19:36 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:19:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 26 10:19:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039591914' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26276 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: status {prefix=status} (starting...)
Jan 26 10:19:36 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16653 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:19:36] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 26 10:19:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:19:36] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 26 10:19:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 26 10:19:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2871500887' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 26 10:19:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:19:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:36 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 26 10:19:36 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1174436668' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:19:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:19:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:19:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:19:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:19:37 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.25921 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.16620 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.25855 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.25867 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.26258 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.16638 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/355643123' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4039591914' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3164859405' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2820007' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3683707489' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2871500887' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1762731804' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1146455129' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1174436668' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:19:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:19:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:37.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:19:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:37.200Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:19:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 26 10:19:37 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1508688286' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.25936 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 26 10:19:37 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/637312991' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 26 10:19:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:37.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 26 10:19:37 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2156067595' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26330 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mgr[74755]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 10:19:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T10:19:37.705+0000 7ff0f59d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 10:19:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 26 10:19:37 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16698 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:37 compute-0 ceph-mgr[74755]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 10:19:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T10:19:37.835+0000 7ff0f59d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 10:19:37 compute-0 nova_compute[254880]: 2026-01-26 10:19:37.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:19:37 compute-0 nova_compute[254880]: 2026-01-26 10:19:37.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:19:37 compute-0 nova_compute[254880]: 2026-01-26 10:19:37.989 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:19:37 compute-0 nova_compute[254880]: 2026-01-26 10:19:37.990 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:19:37 compute-0 nova_compute[254880]: 2026-01-26 10:19:37.990 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:19:37 compute-0 nova_compute[254880]: 2026-01-26 10:19:37.990 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:19:37 compute-0 nova_compute[254880]: 2026-01-26 10:19:37.990 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:19:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 26 10:19:38 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4176106981' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.25876 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.26276 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.16653 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.25921 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3257461002' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1508688286' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/130987033' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/655899828' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/637312991' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1056870647' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3972199707' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2156067595' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3912734262' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4235847919' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 26 10:19:38 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1990822071' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26366 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:19:38 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/779557225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:19:38 compute-0 nova_compute[254880]: 2026-01-26 10:19:38.577 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:19:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 26 10:19:38 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1117934623' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.25990 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T10:19:38.730+0000 7ff0f59d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 10:19:38 compute-0 ceph-mgr[74755]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 10:19:38 compute-0 nova_compute[254880]: 2026-01-26 10:19:38.737 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:19:38 compute-0 nova_compute[254880]: 2026-01-26 10:19:38.738 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4227MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:19:38 compute-0 nova_compute[254880]: 2026-01-26 10:19:38.739 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:19:38 compute-0 nova_compute[254880]: 2026-01-26 10:19:38.739 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:19:38 compute-0 nova_compute[254880]: 2026-01-26 10:19:38.864 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:19:38 compute-0 nova_compute[254880]: 2026-01-26 10:19:38.864 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:19:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:38.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:19:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 26 10:19:38 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4165636823' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 26 10:19:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:38 compute-0 nova_compute[254880]: 2026-01-26 10:19:38.890 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:19:38 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26390 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16755 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:39.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:39 compute-0 nova_compute[254880]: 2026-01-26 10:19:39.186 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:39 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26023 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:19:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1248639188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 26 10:19:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2545354252' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16785 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 26 10:19:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:39.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 26 10:19:39 compute-0 nova_compute[254880]: 2026-01-26 10:19:39.471 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:19:39 compute-0 nova_compute[254880]: 2026-01-26 10:19:39.478 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.25936 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.26330 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.16698 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4231684812' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2147534676' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4176106981' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/325511667' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1990822071' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2903306104' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.26366 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/779557225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3812039934' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1117934623' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.25990 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4165636823' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3940556563' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.26390 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.16755 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4000464440' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1421307840' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:19:39 compute-0 nova_compute[254880]: 2026-01-26 10:19:39.614 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:19:39 compute-0 nova_compute[254880]: 2026-01-26 10:19:39.616 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:19:39 compute-0 nova_compute[254880]: 2026-01-26 10:19:39.616 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:19:39 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26044 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26423 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16803 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 26 10:19:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794654331' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:05.945064+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 6217728 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:06.945468+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 6217728 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:07.945685+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 6209536 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:08.945877+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971856 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 6209536 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:09.946021+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 6201344 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd604c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:10.946173+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 6201344 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [0,0,1])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:11.946350+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 82968576 unmapped: 6201344 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:12.946486+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 82976768 unmapped: 6193152 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:13.946869+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971265 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 6184960 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:14.947098+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 6176768 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:15.947448+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 6176768 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:16.947728+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 6168576 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:17.947954+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 6168576 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:18.948164+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971265 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 6168576 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.924120903s of 13.943492889s, submitted: 5
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:19.948243+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 6160384 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:20.948351+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 6160384 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:21.948513+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 6152192 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:22.948661+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 6152192 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:23.948782+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970674 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 6152192 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:24.948929+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 6144000 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:25.949113+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 6144000 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:26.950068+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 6152192 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:27.950267+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 6144000 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf5400 session 0x55c5bfd72780
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf5000 session 0x55c5bff8d4a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:28.950405+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970542 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 6144000 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:29.950540+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 6135808 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:30.950656+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 6135808 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:31.950825+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 6127616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:32.950953+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 6127616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:33.951089+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970542 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 6127616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:34.951319+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 6119424 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:35.951479+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 6119424 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:36.953273+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 6111232 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:37.953426+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 6111232 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:38.953617+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970542 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 6111232 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd6c2c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.988218307s of 19.993902206s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:39.953749+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 6078464 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:40.953863+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 6078464 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:41.953982+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 6070272 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:42.954100+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 6062080 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:43.954280+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972186 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 6062080 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:44.954414+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 6053888 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:45.954525+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 6053888 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:46.954660+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 6053888 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:47.954864+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 6045696 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:48.955047+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972186 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 6037504 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:49.955217+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 6029312 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:50.955435+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 6029312 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:51.955629+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 6021120 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:52.955833+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 6021120 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:53.956010+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972186 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 6021120 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.978791237s of 14.988065720s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:54.956166+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 6012928 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:55.956361+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 6012928 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:56.956529+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 6004736 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:57.956657+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 6004736 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd604c00 session 0x55c5bff86f00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:58.956777+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972054 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 5996544 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:47:59.956909+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 5996544 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:00.957067+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 5996544 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:01.957208+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 5988352 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:02.957336+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 5988352 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:03.957543+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972054 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 5980160 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:04.957678+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 5980160 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:05.957862+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 5980160 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:06.958053+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 5971968 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:07.958209+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 5971968 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:08.958398+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972054 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 5963776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.720639229s of 14.724000931s, submitted: 1
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:09.958586+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 5963776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:10.958751+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 5963776 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:11.958885+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 5955584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:12.959054+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 5955584 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:13.959206+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973698 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 5947392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:14.959350+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 5939200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:15.959543+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83230720 unmapped: 5939200 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd6c2c00 session 0x55c5bfe261e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:16.959769+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 5931008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:17.959906+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 5931008 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:18.960002+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973107 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 5906432 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:19.960151+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 5898240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:20.960310+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 5898240 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:21.960430+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 5890048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:22.960585+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83279872 unmapped: 5890048 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:23.960753+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973107 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 5881856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:24.960910+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83288064 unmapped: 5881856 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.193523407s of 16.218500137s, submitted: 3
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:25.961100+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 5865472 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:26.961293+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 5857280 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:27.961414+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 5849088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:28.961559+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973107 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 5849088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:29.961707+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 5849088 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:30.961833+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 5840896 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:31.961976+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 5840896 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:32.962118+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 5840896 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:33.962304+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974619 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 5832704 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:34.962453+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 5824512 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:35.962681+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.827674866s of 10.838430405s, submitted: 3
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 5816320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:36.962848+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 5816320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:37.962991+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 5816320 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:38.963116+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974028 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 5808128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:39.963239+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 5808128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:40.963369+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 5799936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:41.963556+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 5791744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:42.963671+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 5783552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:43.963806+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974028 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 5783552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:44.963984+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 5783552 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:45.964172+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 5775360 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:46.964421+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 5775360 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:47.964587+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 5775360 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:48.964742+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973896 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 5758976 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:49.964879+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 5758976 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:50.965034+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 5750784 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:51.965176+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 5750784 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:52.965314+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 5742592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:53.965478+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973896 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 5742592 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:54.965649+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 5734400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:55.965754+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 5734400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:56.965866+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 5734400 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:57.966001+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 5726208 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:58.966168+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973896 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 5718016 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:48:59.966349+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 5709824 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:00.966481+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 5709824 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:01.966628+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 5709824 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:02.966758+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 5701632 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:03.966897+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973896 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 5701632 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:04.967045+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd314c00 session 0x55c5c0132960
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 5701632 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:05.967285+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 5693440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:06.967588+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf4c00 session 0x55c5bf51b2c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5bf1d45a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 5693440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:07.967773+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 5685248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:08.967887+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973896 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 5685248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:09.968025+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 5677056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:10.968149+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 5677056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:11.968265+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 5677056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:12.968384+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 5668864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:13.968502+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973896 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 5668864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:14.968627+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 5660672 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:15.968754+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf5400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 39.379703522s of 39.482639313s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 5660672 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:16.968896+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 5660672 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:17.969047+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf5800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 5652480 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:18.969176+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974160 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 5652480 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:19.969328+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 5644288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:20.969495+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 5644288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:21.969628+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d4400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 5636096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:22.969757+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 5627904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:23.969881+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977184 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 5595136 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:24.970015+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 5595136 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:25.970176+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 5586944 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:26.970490+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.475863457s of 11.538871765s, submitted: 4
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 5586944 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:27.970637+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 5586944 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:28.970790+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976593 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 5578752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:29.970984+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 5578752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:30.971151+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 5570560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:31.971326+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 5570560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:32.971562+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 5570560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:33.971775+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976461 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 5562368 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:34.972035+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 5562368 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:35.972253+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 5554176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:36.972466+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 5554176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:37.972609+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 5554176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:38.972763+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976329 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 5545984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:39.972903+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 5545984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:40.973074+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 5537792 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:41.973320+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 5537792 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:42.973473+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 5537792 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:43.973649+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf5000 session 0x55c5bff8ef00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd314400 session 0x55c5bd8e3860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976329 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 5529600 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:44.973805+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 5529600 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:45.973937+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 5521408 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:46.974096+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 5521408 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:47.974273+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 5513216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:48.974476+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976329 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 5513216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:49.974644+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 5513216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:50.974817+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 5513216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:51.974980+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8415 writes, 33K keys, 8415 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8415 writes, 1917 syncs, 4.39 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8415 writes, 33K keys, 8415 commit groups, 1.0 writes per commit group, ingest: 21.16 MB, 0.04 MB/s
                                           Interval WAL: 8415 writes, 1917 syncs, 4.39 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 5439488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:52.975163+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 5439488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:53.975337+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976329 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 5431296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:54.975499+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.708448410s of 27.728773117s, submitted: 3
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 5431296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:55.975654+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 5423104 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:56.975859+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 5423104 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:57.976048+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 5423104 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:58.976229+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976461 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 5414912 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:49:59.976364+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 5414912 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:00.976508+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 5406720 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:01.976691+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 5406720 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:02.976825+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 5390336 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:03.976986+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976791 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 5390336 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:04.977170+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 5390336 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:05.977369+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 5382144 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:06.977562+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 5382144 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:07.977694+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 5382144 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:08.977837+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.177832603s of 14.192712784s, submitted: 4
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976659 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 5373952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:09.977964+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 5373952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:10.978099+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 5365760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:11.978282+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 5365760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:12.978426+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 5357568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:13.978632+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976659 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 5357568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:14.978769+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 5357568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:15.978897+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 5349376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:16.979048+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 5349376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:17.979274+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 5349376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:18.979434+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c04d4400 session 0x55c5bf1b72c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf5400 session 0x55c5bf1b6b40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976659 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 5341184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:19.979565+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 5341184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:20.979774+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 5332992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:21.979959+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 5332992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:22.980112+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 5324800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:23.980255+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976659 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 5316608 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:24.980436+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 5316608 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:25.980610+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 5308416 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:26.980818+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 5308416 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:27.980953+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 5308416 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:28.981116+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976659 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 5292032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:29.981296+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd6c2c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.578565598s of 20.583370209s, submitted: 1
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 5292032 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:30.981501+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 5283840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:31.981626+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 5283840 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:32.981765+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 5275648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:33.981955+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978303 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:34.982124+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 5275648 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:35.982257+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 5267456 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf4c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:36.982408+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 5259264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:37.982540+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 5259264 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:38.982655+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 5251072 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979224 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:39.982922+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 5251072 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:40.983093+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 5242880 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:41.983242+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 5242880 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:42.983371+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 5234688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:43.983516+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 5234688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979224 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:44.983708+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 5234688 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:45.983885+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 5226496 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.937955856s of 15.951717377s, submitted: 4
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:46.984072+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 5226496 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:47.984421+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 5226496 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:48.984585+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 5210112 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979092 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:49.984729+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 5210112 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:50.984938+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 5201920 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:51.985103+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83968000 unmapped: 5201920 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:52.985301+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 5185536 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:53.985466+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 5177344 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979092 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:54.985601+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 5169152 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:55.985730+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 5160960 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:56.985922+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 5160960 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:57.986156+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 5152768 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:58.986255+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 5152768 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979092 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:50:59.986439+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 5144576 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:00.986576+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 5144576 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:01.986710+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 5144576 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bff885a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf5800 session 0x55c5bfd72780
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:02.986834+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 5136384 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:03.986970+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 5136384 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979092 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:04.987328+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 5128192 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:05.987451+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 5128192 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf4c00 session 0x55c5bff87e00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd6c2c00 session 0x55c5bd8dc1e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:06.987741+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 5128192 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:07.987877+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 5120000 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:08.988015+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 5120000 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:09.988145+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979092 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 5111808 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:10.988332+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 5111808 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:11.988540+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 5111808 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.733755112s of 26.795358658s, submitted: 1
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:12.988678+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 5103616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:13.988860+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 5103616 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:14.988995+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979224 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 5087232 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:15.989139+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 5087232 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:16.989321+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 5079040 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf5400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:17.989475+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 5079040 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d4400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:18.989585+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 5062656 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:19.989792+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980868 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 5062656 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:20.989949+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 5054464 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:21.990140+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 5054464 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:22.990257+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 5054464 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:23.990382+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 5046272 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:24.990535+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980277 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 5046272 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:25.990685+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 5038080 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:26.990864+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 5038080 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:27.991024+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 5038080 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.588219643s of 15.602917671s, submitted: 4
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:28.991189+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 5029888 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:29.991392+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980145 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 5029888 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:30.991524+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 5021696 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:31.991641+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 5021696 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:32.991813+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 5013504 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:33.991972+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 5013504 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:34.992108+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 5013504 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:35.992253+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 5005312 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:37.032031+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 5005312 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:38.032277+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4997120 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:39.033115+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4997120 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:40.033951+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 4997120 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:41.034137+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 4988928 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:42.072082+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 4988928 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:43.072404+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 4980736 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:44.072534+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 4972544 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:45.072811+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 4972544 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:46.072976+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4964352 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:47.073217+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 4964352 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:48.073444+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 4956160 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.056434631s of 20.467685699s, submitted: 3
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:49.073587+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 4923392 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:50.073755+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:51.073921+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:52.074132+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:53.074312+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:54.074512+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:55.074686+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:56.074902+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:57.075166+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:58.075387+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:51:59.075556+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:00.075755+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:01.076757+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:02.076897+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:03.077103+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:04.079231+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:05.081088+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c04d4400 session 0x55c5c048ab40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd314400 session 0x55c5c048a960
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:06.081526+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:07.081693+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:08.081876+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:09.082005+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:10.082142+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:11.082263+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:12.082379+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:13.082506+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:14.082639+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:15.082867+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:16.082995+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.560863495s of 27.214382172s, submitted: 213
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 4784128 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:17.083425+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 4775936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:18.083556+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 4775936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:19.083797+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 4775936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:20.357378+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979554 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 4775936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:21.357606+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 4775936 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:22.357748+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 4767744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:23.358172+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 4767744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:24.358360+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 4767744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:25.358578+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979554 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 4767744 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.941340446s of 10.002558708s, submitted: 19
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:26.358685+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 4751360 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [1])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:27.358866+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4669440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:28.359113+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4669440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:29.359296+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4669440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:30.359528+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979554 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4669440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:31.359691+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4669440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:32.359841+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 4669440 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:33.360057+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:34.360190+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:35.360314+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:36.360447+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:37.360654+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:38.360789+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:39.360943+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:40.361114+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:41.361285+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:42.361450+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:43.361684+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:44.361809+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:45.361959+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:46.362152+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:47.362382+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:48.362517+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:49.362668+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:50.362832+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:51.363063+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:52.363279+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:53.363487+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:54.363650+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:55.363834+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:56.364076+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:57.364311+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:58.364522+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:52:59.364707+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:00.365361+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:01.365527+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:02.365843+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:03.366120+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:04.366298+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:05.366456+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:06.366623+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:07.366843+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:08.366988+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:09.367159+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:10.367267+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:11.367526+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:12.367728+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:13.367930+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:14.368168+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:15.368386+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:16.368573+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:17.368817+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:18.369003+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:19.369262+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:20.369428+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:21.369835+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:22.370046+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:23.370225+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:24.370432+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:25.370618+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:26.370837+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:27.371046+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:28.371284+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:29.371420+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:30.371675+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:31.371842+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:32.371992+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 4653056 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:33.372293+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:34.372425+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:35.372540+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf5400 session 0x55c5c049e3c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:36.372663+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:37.374768+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c067fc00 session 0x55c5c09992c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:38.374939+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:39.375144+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:40.375262+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:41.375518+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:42.375701+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:43.375916+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:44.376154+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:45.376337+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979422 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:46.376474+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:47.376665+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 80.949409485s of 81.228446960s, submitted: 101
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:48.376898+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd6c2c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:49.377512+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:50.377708+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981198 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:51.377822+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:52.377970+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:53.378170+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf4c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 4661248 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:54.378373+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:55.378494+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982710 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:56.378661+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:57.378936+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:58.379068+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:53:59.379179+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:00.379312+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982710 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:01.379488+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.826131821s of 14.979765892s, submitted: 4
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:02.379660+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:03.379809+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:04.379981+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:05.380113+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982578 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:06.380251+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:07.380427+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:08.380553+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:09.380667+0000)
Jan 26 10:19:40 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26059 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:10.380812+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982446 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:11.380988+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:12.381212+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:13.381376+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:14.381550+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:15.381759+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982446 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:16.381959+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:17.382166+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:18.382342+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:19.382516+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:20.382690+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982446 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:21.382882+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:22.383012+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:23.383142+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:24.383260+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:25.383384+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982446 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:26.383528+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:27.383685+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:28.383811+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:29.383931+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:30.384080+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982446 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:31.384221+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:32.384329+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:33.384488+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:34.384884+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:35.385009+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982446 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:36.385261+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:37.385454+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:38.385597+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:39.385734+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:40.386454+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982446 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:41.386643+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:42.386836+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:43.387042+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 4644864 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:44.387176+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 4636672 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:45.387376+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982446 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 4636672 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:46.387597+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd6c2c00 session 0x55c5c049ef00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 4636672 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:47.387817+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 4628480 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:48.388033+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:49.388188+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:50.388408+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982446 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:51.388625+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:52.388828+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:53.389002+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:54.389167+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:55.389423+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982446 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:56.389558+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:57.389745+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d4400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 55.568088531s of 55.610332489s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:58.389926+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:54:59.390174+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:00.390358+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982578 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:01.390525+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:02.390662+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:03.390882+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:04.391067+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:05.391367+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984090 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:06.391600+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:07.391842+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.119104385s of 10.127031326s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:08.392024+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:09.392164+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:10.392262+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982908 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:11.392386+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:12.392529+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:13.392671+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:14.392833+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:15.392973+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982776 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:16.393155+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:17.393397+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:18.393595+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:19.393726+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:20.393858+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982776 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:21.393994+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:22.394130+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:23.394295+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:24.394439+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:25.394571+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982776 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:26.394698+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:27.394870+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:28.394991+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:29.395136+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:30.395265+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982776 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:31.395415+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 4620288 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:32.395557+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:33.395691+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:34.395846+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:35.396002+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c04d4400 session 0x55c5c049f4a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982776 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:36.396222+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:37.396423+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:38.396623+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:39.396807+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:40.397044+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982776 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:41.397153+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:42.397263+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:43.397400+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:44.397541+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:45.397670+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982776 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:46.397806+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 4612096 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.027221680s of 38.357192993s, submitted: 3
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:47.397948+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:48.398129+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:49.398309+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:50.398470+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982908 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:51.398600+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:52.398750+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:53.398893+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:54.399046+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:55.399242+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982317 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:56.399357+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:57.399513+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:58.399631+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:55:59.399905+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:00.400028+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982317 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:01.400148+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf4c00 session 0x55c5c049e3c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd314400 session 0x55c5bff8f860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:02.400256+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 4603904 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5bd8dc5a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd314c00 session 0x55c5bff8e1e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.504093170s of 16.510330200s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:03.400416+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:04.400640+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:05.400850+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982185 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:06.401037+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:07.401236+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:08.402280+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:09.402407+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:10.402545+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982185 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:11.402675+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:12.402829+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd6c2c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.315752029s of 10.329577446s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:13.403002+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:14.403171+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:15.403267+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 4595712 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf4c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983961 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:16.403411+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:17.403567+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:18.403726+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:19.403874+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:20.404014+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983961 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:21.404189+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:22.404357+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:23.404524+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:24.404667+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:25.404839+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983961 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:26.404994+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.076426506s of 14.082677841s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:27.405156+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:28.405321+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:29.405450+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:30.405590+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983697 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:31.405748+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:32.405882+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:33.406011+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:34.406266+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:35.406423+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983697 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:36.406559+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:37.406723+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf4c00 session 0x55c5c0aa7a40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c067f400 session 0x55c5c049fc20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:38.407053+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:39.407271+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:40.407414+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983697 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:41.407576+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:42.407723+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:43.407855+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:44.407996+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:45.408157+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983697 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:46.408333+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:47.408544+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:48.408678+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d4400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.580358505s of 21.588951111s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:49.408813+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:50.408994+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983829 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:51.409166+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:52.409371+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 3530752 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:53.409586+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3522560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:54.409742+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3522560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:55.409900+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3522560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985341 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:56.410038+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3522560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:57.410292+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3522560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:58.410531+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 3522560 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:56:59.410694+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3514368 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:00.410840+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3514368 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.222995758s of 12.230422974s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:01.411061+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984750 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3514368 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:02.411286+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3514368 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:03.411420+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3514368 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:04.411590+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 3514368 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:05.411709+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:06.411844+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984750 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:07.412019+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 3506176 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:08.412154+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:09.412284+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 3465216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:10.412474+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 3465216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:11.412688+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984618 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 3465216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:12.412865+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 3465216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:13.413001+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 3465216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:14.413128+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:15.413255+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:16.413386+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984618 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:17.413544+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:18.413684+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:19.413810+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:20.413972+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:21.414167+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984618 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:22.414344+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd6c2c00 session 0x55c5bf1b63c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:23.414499+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:24.414633+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:25.414768+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:26.414912+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984618 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:27.415074+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:28.415226+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:29.415367+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:30.415498+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:31.415739+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984618 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:32.415905+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:33.416168+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.331504822s of 32.434001923s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:34.416310+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:35.416428+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:36.416595+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984750 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:37.416778+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:38.416964+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:39.417131+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c067fc00 session 0x55c5c0ab41e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c04d4400 session 0x55c5c0132000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf4c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:40.417317+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:41.417582+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987774 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:42.417715+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:43.417859+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:44.418037+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:45.418250+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:46.418418+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987774 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:47.418596+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.164649963s of 14.661804199s, submitted: 3
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:48.418733+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:49.418922+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:50.419051+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:51.419227+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987642 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:52.419374+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf4c00 session 0x55c5bff8f860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd314c00 session 0x55c5c0abaf00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:53.419508+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:54.419650+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:55.419792+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:56.419956+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987774 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:57.420126+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:58.420263+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:59.420420+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:00.420549+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3440640 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:01.420903+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987774 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3440640 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:02.421035+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3440640 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:03.421156+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.274815559s of 15.399731636s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:04.421272+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:05.421390+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:06.421562+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987774 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:07.421856+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:08.422064+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:09.422215+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:10.422343+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:11.422478+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988695 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:12.422613+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:13.422781+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:14.422904+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:15.423057+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:16.423225+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988695 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:17.423683+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:18.423815+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.839399338s of 14.924924850s, submitted: 4
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 3424256 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:19.423955+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:20.424107+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:21.424251+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:22.424435+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:23.424575+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:24.424707+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:25.424839+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:26.424990+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:27.425167+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:28.425317+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:29.425485+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:30.425605+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:31.425808+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:32.426045+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:33.426217+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:34.426338+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:35.426534+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:36.426745+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:37.426983+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:38.427151+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:39.427336+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:40.427490+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:41.427656+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:42.427832+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:43.428001+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:44.428227+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:45.428572+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:46.428735+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:47.429266+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:48.429453+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:49.429575+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c06c0400 session 0x55c5bd708d20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c067f400 session 0x55c5c0ab4d20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:50.429785+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:51.429939+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:52.430171+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:53.430370+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:54.430586+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:55.430727+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:56.430864+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:57.431080+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:58.431272+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:59.431515+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:00.431665+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.002922058s of 42.007659912s, submitted: 1
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:01.431849+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988695 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:02.432055+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:03.432237+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:04.432462+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:05.432670+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:06.432817+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf4c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990207 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:07.432993+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:08.433131+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:09.433447+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:10.433582+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:11.433718+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990207 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:12.433858+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.097983360s of 12.105645180s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:13.434033+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:14.434256+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:15.434393+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:16.434526+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989616 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:17.434721+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:18.434899+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:19.435103+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:20.435270+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:21.435409+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:22.435548+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:23.435694+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 podman[279933]: 2026-01-26 10:19:40.1629933 +0000 UTC m=+0.093718828 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:24.435900+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:25.436128+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:26.436301+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:27.436503+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:28.436656+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:29.436786+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:30.436921+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:31.437071+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:32.437224+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:33.437387+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:34.437517+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:35.437655+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:36.437785+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:37.437965+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:38.438115+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:39.438301+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:40.438474+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:41.438612+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:42.438902+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:43.439038+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:44.439180+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:45.439367+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:46.439509+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:47.440913+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:48.441038+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf4c00 session 0x55c5bd825c20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd314c00 session 0x55c5bff8c960
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:49.441159+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:50.441405+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:51.441539+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9188 writes, 34K keys, 9188 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9188 writes, 2303 syncs, 3.99 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 773 writes, 1164 keys, 773 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
                                           Interval WAL: 773 writes, 386 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 3350528 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:52.441761+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 3350528 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:53.441984+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 3350528 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:54.442175+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 3342336 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:55.442399+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 3342336 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:56.442535+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 3342336 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:57.442709+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 3342336 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:58.442851+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d4400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.497116089s of 46.570705414s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3334144 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:59.442984+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3334144 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:00.443093+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:01.443262+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3334144 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989616 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:02.443403+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:03.443545+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:04.443684+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:05.443823+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:06.443954+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991128 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:07.444187+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:08.444463+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:09.444768+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:10.444927+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:11.445047+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990537 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:12.445169+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:13.445436+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:14.445586+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:15.445753+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.288171768s of 16.321334839s, submitted: 3
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000043s
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:16.445896+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:17.446069+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:18.446289+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:19.446439+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:20.446564+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:21.446702+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:22.446837+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:23.447056+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:24.447221+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:25.447352+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:26.447493+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:27.447640+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:28.447794+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:29.447927+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:30.448153+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:31.448392+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:32.448530+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:33.448722+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:34.448960+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:35.449181+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:36.449424+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:37.449603+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:38.449761+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:39.449896+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:40.450017+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:41.450148+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c06c0000 session 0x55c5bd799a40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:42.450322+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:43.450446+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:44.450582+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:45.450727+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:46.450873+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:47.451064+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:48.451400+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:49.451625+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:50.451751+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:51.451886+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:52.452050+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.008251190s of 37.012298584s, submitted: 1
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:53.452228+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:54.452403+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:55.452574+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:56.452737+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993561 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:57.452907+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5c0abb4a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd314400 session 0x55c5c0aa74a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:58.453043+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:59.453131+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:00.453252+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:01.453388+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993561 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:02.453559+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:03.453737+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:04.453915+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:05.454038+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:06.454410+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993561 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:07.454596+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:08.454748+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.823002815s of 16.056192398s, submitted: 3
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:09.454877+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:10.455033+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:11.455253+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995073 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:12.455481+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:13.455855+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:14.456109+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf4c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:15.456276+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:16.456527+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995073 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:17.456846+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:18.457053+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:19.457217+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:20.457512+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.591454506s of 12.605092049s, submitted: 4
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:21.457649+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:22.770733+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:23.770851+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:24.771186+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:25.771438+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:26.771870+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:27.772446+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:28.772648+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:29.772836+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:30.773031+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:31.773264+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:32.773419+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:33.773601+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:34.773984+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:35.774147+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:36.774289+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:37.774494+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:38.774676+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:39.774936+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:40.775087+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:41.775231+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:42.775371+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:43.775564+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:44.775735+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:45.775938+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:46.776168+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:47.776457+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:48.776631+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.426548004s of 27.788986206s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:49.776810+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 4276224 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:50.776961+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:51.777103+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:52.777274+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:53.777476+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:54.777652+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:55.777830+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:56.778032+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:57.778284+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:58.778389+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:59.778574+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:00.778747+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:01.778886+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:02.779003+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:03.779137+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:04.779305+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:05.779484+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:06.779635+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:07.779829+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:08.779955+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:09.780097+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:10.780291+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:11.780491+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:12.780701+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:13.780864+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:14.781007+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:15.781168+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:16.781380+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:17.781553+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:18.781710+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:19.781844+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:20.782020+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:21.782254+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:22.782451+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:23.782637+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:24.782773+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:25.782940+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.075836182s of 37.604110718s, submitted: 189
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:26.783088+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4243456 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993846 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:27.783231+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4243456 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:28.783397+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:29.783519+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:30.783607+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:31.783709+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:32.783851+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:33.783976+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:34.784112+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:35.784284+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:36.784516+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:37.784717+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:38.784851+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:39.785046+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:40.785250+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:41.785394+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:42.785525+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:43.785665+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:44.785804+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:45.785932+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:46.786065+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:47.786265+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:48.786398+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:49.786531+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:50.786675+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:51.786811+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:52.786961+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:53.787102+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:54.787324+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:55.787488+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:56.787663+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:57.787869+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:58.788059+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:59.788215+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:00.788357+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:01.788543+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:02.788707+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:03.788865+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:04.789024+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:05.789166+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:06.789303+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c067fc00 session 0x55c5c049e3c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5bfd73c20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:07.789531+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:08.789668+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:09.789817+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:10.789972+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:11.790108+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:12.790251+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:13.790422+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:14.790596+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:15.790723+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:16.790904+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:17.791071+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 50.814117432s of 51.125740051s, submitted: 118
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993891 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:18.791263+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd314c00 session 0x55c5c06ded20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf4c00 session 0x55c5c048b0e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:19.791383+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:20.791526+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:21.791711+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:22.791856+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993891 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:23.792004+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:24.792160+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:25.792316+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:26.792588+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:27.792807+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995403 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:28.792955+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:29.793102+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.486224174s of 12.492043495s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:30.793312+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:31.793445+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:32.793586+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995403 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:33.793717+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:34.793883+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:35.794005+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c06c0800 session 0x55c5c049f2c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c04d4400 session 0x55c5c06cd4a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:36.794147+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:37.794363+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994812 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:38.794528+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:39.794685+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:40.794866+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:41.795017+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:42.795179+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994812 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:43.795369+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.910851479s of 13.921665192s, submitted: 3
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:44.795508+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:45.795641+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:46.795777+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:47.795941+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994812 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:48.796067+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:49.796462+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:50.796781+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:51.797066+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:52.797339+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995733 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:53.797515+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:54.797630+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:55.797801+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:56.797965+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:57.798172+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995733 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:58.798354+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:59.798520+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:00.798948+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:01.799301+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:02.799597+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995733 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:03.799742+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.066726685s of 20.080118179s, submitted: 4
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:04.799954+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:05.800248+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:06.800460+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:07.800709+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:08.800957+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:09.801131+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:10.801334+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:11.801567+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:12.801827+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:13.802009+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:14.802298+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:15.802510+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:16.802753+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:17.802988+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:18.803140+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:19.803395+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:20.803640+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:21.803789+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:22.804589+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:23.804763+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:24.804968+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:25.805443+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:26.805784+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:27.806484+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:28.806939+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:29.807244+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:30.807399+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:31.807652+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:32.807817+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:33.807960+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:34.808141+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:35.808355+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:36.808601+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:37.809104+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:38.809345+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:39.809545+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:40.809812+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:41.809987+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:42.810124+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:43.810470+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:44.810607+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:45.810834+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:46.810971+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:47.811264+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:48.811730+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:49.811860+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:50.811990+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5bf1de780
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:51.812115+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:52.812278+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26435 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:53.812453+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:54.812615+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:55.812762+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bff8b0e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:56.813167+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:57.813415+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:58.813636+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:59.813920+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:00.814379+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:01.814740+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 58.070049286s of 58.074813843s, submitted: 1
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:02.814940+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995733 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:03.815147+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:04.815393+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:05.815664+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:06.815879+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:07.816098+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997377 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:08.816345+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:09.816509+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:10.816700+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:11.816906+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.218690872s of 10.236251831s, submitted: 5
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:12.817079+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002064 data_alloc: 218103808 data_used: 167936
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:13.817272+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _renew_subs
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:14.817489+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 20176896 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc656000/0x0/0x4ffc00000, data 0xf735e/0x1b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _renew_subs
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 146 ms_handle_reset con 0x55c5c06c1000 session 0x55c5c049f0e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:15.817736+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 20168704 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fbe52000/0x0/0x4ffc00000, data 0x8f946b/0x9b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:16.817896+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 20135936 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 147 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5c06cd680
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:17.818166+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 20127744 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099850 data_alloc: 218103808 data_used: 172032
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:18.818476+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 20127744 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:19.818761+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 20127744 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:20.819299+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 20119552 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9dc000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:21.819522+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 20119552 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9dc000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:22.819684+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097982 data_alloc: 218103808 data_used: 172032
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:23.819869+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:24.820033+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:25.820226+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:26.820416+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:27.820670+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097982 data_alloc: 218103808 data_used: 172032
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:28.821072+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:29.821413+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:30.821607+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:31.821907+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:32.822130+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097982 data_alloc: 218103808 data_used: 172032
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:33.822486+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:34.822769+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:35.823069+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:36.823383+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:37.823674+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097982 data_alloc: 218103808 data_used: 172032
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:38.823840+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5bd314400 session 0x55c5c0abaf00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5c067f400 session 0x55c5c0ab5c20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:39.824116+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:40.824302+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:41.824489+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:42.824654+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097982 data_alloc: 218103808 data_used: 172032
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:43.824936+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:44.825084+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:45.825255+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:46.825412+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:47.825593+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097982 data_alloc: 218103808 data_used: 172032
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:48.825725+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:49.825873+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d4400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.039752960s of 37.658199310s, submitted: 45
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:50.826001+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:51.826156+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:52.826533+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098114 data_alloc: 218103808 data_used: 172032
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:53.826682+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:54.826831+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:55.826986+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:56.827150+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:57.827357+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099626 data_alloc: 218103808 data_used: 172032
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:58.827509+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:59.827847+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:00.828110+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:01.828470+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5c06c0800 session 0x55c5c0ab4d20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5c06c1800 session 0x55c5c0ab41e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5bd314400 session 0x55c5bd708d20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.439785004s of 12.535116196s, submitted: 3
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:02.828776+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 20103168 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098312 data_alloc: 218103808 data_used: 172032
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:03.828915+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5bf7165a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 12345344 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5c067f400 session 0x55c5c06df0e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:04.829248+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 12345344 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:05.829390+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 148 handle_osd_map epochs [149,149], i have 149, src has [1,149]
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 12328960 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:06.829590+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 12320768 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:07.829798+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 150 ms_handle_reset con 0x55c5c06c0800 session 0x55c5c06cda40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 12083200 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 150 ms_handle_reset con 0x55c5c06c1c00 session 0x55c5c0aa65a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 150 ms_handle_reset con 0x55c5bd314400 session 0x55c5bff8b0e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176555 data_alloc: 218103808 data_used: 6991872
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 150 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5bfe27e00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 150 ms_handle_reset con 0x55c5c067f400 session 0x55c5bf1d45a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:08.829991+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 12091392 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:09.830145+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 12091392 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:10.830307+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 12091392 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb433000/0x0/0x4ffc00000, data 0x13147e3/0x13d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:11.830421+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 12091392 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:12.830703+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 12091392 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _renew_subs
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.216803551s of 10.641160011s, submitted: 41
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179129 data_alloc: 218103808 data_used: 6991872
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:13.831013+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 12091392 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1400 session 0x55c5bff8d860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0000 session 0x55c5bf1bc5a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:14.831121+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb42f000/0x0/0x4ffc00000, data 0x13167b5/0x13dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:15.831305+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:16.831489+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:17.831690+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218497 data_alloc: 234881024 data_used: 12800000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:18.831826+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb42f000/0x0/0x4ffc00000, data 0x13167b5/0x13dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:19.831956+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:20.832150+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb42f000/0x0/0x4ffc00000, data 0x13167b5/0x13dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:21.832353+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:22.832512+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218497 data_alloc: 234881024 data_used: 12800000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:23.832664+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:24.832803+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.587004662s of 11.596732140s, submitted: 10
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb42f000/0x0/0x4ffc00000, data 0x13167b5/0x13dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [1])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 104103936 unmapped: 2899968 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:25.832952+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 103112704 unmapped: 3891200 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:26.833094+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4faf22000/0x0/0x4ffc00000, data 0x18247b5/0x18ea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 3825664 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:27.833262+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 2703360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261579 data_alloc: 234881024 data_used: 13148160
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:28.833397+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bd7f7c20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d4400 session 0x55c5bcb6eb40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 2703360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:29.833578+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 2785280 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:30.833711+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 2785280 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:31.833846+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d7c000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 2785280 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:32.834015+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262051 data_alloc: 234881024 data_used: 13152256
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:33.834160+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d7c000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:34.834331+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:35.834466+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:36.834616+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:37.834901+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d7c000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262051 data_alloc: 234881024 data_used: 13152256
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:38.835109+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:39.835311+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.887156487s of 15.051704407s, submitted: 64
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:40.835575+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:41.835815+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d7c000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:42.835985+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d7c000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261883 data_alloc: 234881024 data_used: 13152256
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:43.836124+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:44.836434+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:45.836673+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 1744896 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:46.836982+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105283584 unmapped: 1720320 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1400 session 0x55c5bf1ce780
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1c00 session 0x55c5bf1b6000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0000 session 0x55c5bff8e000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:47.837186+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105283584 unmapped: 1720320 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d7c000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0000 session 0x55c5bff8c5a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264907 data_alloc: 234881024 data_used: 13676544
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:48.837433+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106332160 unmapped: 671744 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d4400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d4400 session 0x55c5bff8da40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0c00 session 0x55c5bd8dfc20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:49.837581+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5c049ed20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1c00 session 0x55c5bfd43a40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0000 session 0x55c5bd78bc20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0c00 session 0x55c5bd8252c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 8249344 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:50.837786+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 8249344 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:51.837995+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92e5000/0x0/0x4ffc00000, data 0x1eb07c5/0x1f77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 8265728 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.295121193s of 12.392032623s, submitted: 28
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d4400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d4400 session 0x55c5c087f0e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:52.838136+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 8265728 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321487 data_alloc: 234881024 data_used: 13676544
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:53.838280+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bfd730e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 8265728 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:54.838519+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92e5000/0x0/0x4ffc00000, data 0x1eb07c5/0x1f77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1400 session 0x55c5bd7985a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 8265728 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1400 session 0x55c5bfe27a40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:55.838655+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 8282112 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:56.838778+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 8265728 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:57.839056+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 1966080 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369746 data_alloc: 234881024 data_used: 20332544
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:58.839211+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5bf7165a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0000 session 0x55c5bfd42960
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 1966080 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:59.839340+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92e3000/0x0/0x4ffc00000, data 0x1eb07f8/0x1f79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 1933312 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:00.839507+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 1933312 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:01.839642+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 1933312 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:02.839762+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92e3000/0x0/0x4ffc00000, data 0x1eb07f8/0x1f79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 1900544 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:03.839887+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369746 data_alloc: 234881024 data_used: 20332544
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92e3000/0x0/0x4ffc00000, data 0x1eb07f8/0x1f79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 1867776 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:04.840060+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 1859584 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:05.840226+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 1859584 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:06.840391+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 1859584 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:07.840554+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 1859584 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.207042694s of 16.260372162s, submitted: 8
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:08.840707+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382754 data_alloc: 234881024 data_used: 20353024
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 2351104 heap: 117637120 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92e3000/0x0/0x4ffc00000, data 0x1eb07f8/0x1f79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:09.840862+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115269632 unmapped: 4595712 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:10.841038+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f877e000/0x0/0x4ffc00000, data 0x2a157f8/0x2ade000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115269632 unmapped: 4595712 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:11.841230+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 3383296 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:12.841399+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 3383296 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:13.841569+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464462 data_alloc: 234881024 data_used: 21639168
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8776000/0x0/0x4ffc00000, data 0x2a1d7f8/0x2ae6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 3383296 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:14.841733+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 3375104 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8776000/0x0/0x4ffc00000, data 0x2a1d7f8/0x2ae6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:15.841946+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 3375104 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:16.842179+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8776000/0x0/0x4ffc00000, data 0x2a1d7f8/0x2ae6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 3375104 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:17.842426+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 3366912 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:18.842552+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464478 data_alloc: 234881024 data_used: 21639168
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8776000/0x0/0x4ffc00000, data 0x2a1d7f8/0x2ae6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.643788338s of 10.466860771s, submitted: 84
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 3350528 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:19.842684+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 3350528 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:20.842840+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8776000/0x0/0x4ffc00000, data 0x2a1d7f8/0x2ae6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 3317760 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:21.842999+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 3317760 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8776000/0x0/0x4ffc00000, data 0x2a1d7f8/0x2ae6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0000 session 0x55c5c048ab40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0c00 session 0x55c5bd8dc5a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:22.843137+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd825e00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5c00683c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 3309568 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:23.843342+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0800 session 0x55c5bd825680
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5bf1c2f00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278023 data_alloc: 234881024 data_used: 13680640
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0000 session 0x55c5bff8c3c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:24.843607+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:25.843817+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f996b000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:26.844257+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f996b000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:27.844460+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:28.844610+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277263 data_alloc: 234881024 data_used: 13676544
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f996b000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:29.844743+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:30.844903+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.404096603s of 12.254305840s, submitted: 41
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0800 session 0x55c5c049f2c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:31.845058+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:32.845250+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bf51ab40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:33.845437+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152333 data_alloc: 218103808 data_used: 7512064
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:34.845572+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:35.845725+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:36.845879+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:37.846061+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:38.846259+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153845 data_alloc: 218103808 data_used: 7512064
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:39.846422+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:40.846815+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:41.846967+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:42.847268+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:43.847466+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153845 data_alloc: 218103808 data_used: 7512064
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:44.847613+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:45.847809+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:46.847992+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:47.848271+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:48.848449+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153845 data_alloc: 218103808 data_used: 7512064
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:49.848638+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.363817215s of 18.559324265s, submitted: 22
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:50.848859+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:51.849020+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:52.849251+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:53.849498+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153289 data_alloc: 218103808 data_used: 7512064
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:54.849738+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:55.849957+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:56.850107+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:57.850280+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:58.850448+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153289 data_alloc: 218103808 data_used: 7512064
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:59.850694+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:00.850876+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:01.851074+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:02.851253+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5bd78a1e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 11755520 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.769110680s of 13.776129723s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:03.851415+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5bfd72b40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196133 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:04.851575+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f6a000/0x0/0x4ffc00000, data 0x122e743/0x12f2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:05.851725+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0000 session 0x55c5c06df860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:06.851878+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f6a000/0x0/0x4ffc00000, data 0x122e743/0x12f2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:07.852085+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd7983c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:08.852266+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5c0aba1e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197947 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0000 session 0x55c5c048b2c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:09.852563+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:10.852777+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:11.852933+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f69000/0x0/0x4ffc00000, data 0x122e753/0x12f3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:12.853084+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:13.853244+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228803 data_alloc: 234881024 data_used: 11476992
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:14.853395+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:15.853778+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:16.854381+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:17.854627+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f69000/0x0/0x4ffc00000, data 0x122e753/0x12f3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:18.854823+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228803 data_alloc: 234881024 data_used: 11476992
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:19.854992+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f69000/0x0/0x4ffc00000, data 0x122e753/0x12f3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 13737984 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:20.855265+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 13737984 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:21.855444+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 13737984 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:22.855683+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.122682571s of 19.164644241s, submitted: 15
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112230400 unmapped: 9748480 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:23.855856+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299977 data_alloc: 234881024 data_used: 11915264
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:24.856007+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 11649024 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95f6000/0x0/0x4ffc00000, data 0x1ba1753/0x1c66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:25.856255+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 11649024 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:26.856411+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 11649024 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95f6000/0x0/0x4ffc00000, data 0x1ba1753/0x1c66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:27.856597+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 11640832 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95f6000/0x0/0x4ffc00000, data 0x1ba1753/0x1c66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95f6000/0x0/0x4ffc00000, data 0x1ba1753/0x1c66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:28.856782+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 11640832 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305777 data_alloc: 234881024 data_used: 12337152
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:29.856922+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 11501568 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:30.857107+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 11501568 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:31.857273+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 11501568 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:32.857417+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 11501568 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95d2000/0x0/0x4ffc00000, data 0x1bc5753/0x1c8a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:33.857560+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110485504 unmapped: 11493376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304817 data_alloc: 234881024 data_used: 12337152
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:34.857715+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110485504 unmapped: 11493376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:35.857872+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110485504 unmapped: 11493376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:36.858041+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110485504 unmapped: 11493376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.739342690s of 14.036973953s, submitted: 112
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:37.858258+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11485184 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95cc000/0x0/0x4ffc00000, data 0x1bcb753/0x1c90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:38.858391+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11485184 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304561 data_alloc: 234881024 data_used: 12337152
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:39.858534+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11485184 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:40.858678+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11485184 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:41.858837+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11485184 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95cc000/0x0/0x4ffc00000, data 0x1bcb753/0x1c90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95cc000/0x0/0x4ffc00000, data 0x1bcb753/0x1c90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:42.858990+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11485184 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:43.859127+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 11476992 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304649 data_alloc: 234881024 data_used: 12337152
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:44.859300+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 11476992 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95c9000/0x0/0x4ffc00000, data 0x1bce753/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:45.859416+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 11476992 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:46.859574+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 11476992 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:47.859740+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 11476992 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.978490829s of 11.007252693s, submitted: 4
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:48.859876+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 11427840 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306025 data_alloc: 234881024 data_used: 12353536
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:49.860021+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 11427840 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95bd000/0x0/0x4ffc00000, data 0x1bda753/0x1c9f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:50.860145+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 11427840 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:51.860283+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 11427840 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:52.860400+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 11427840 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5bff8ba40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0800 session 0x55c5c0ab4780
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c09990e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:53.860513+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162034 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:54.860644+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:55.860773+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:56.860895+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:57.861062+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:58.861258+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162034 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:59.861389+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:00.861521+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:01.861649+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:02.861793+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:03.862002+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162034 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:04.862125+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:05.862265+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:06.862455+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:07.862745+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:08.862881+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162034 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:09.863007+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:10.863145+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:11.863275+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:12.863408+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:13.863542+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162034 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:14.863679+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:15.863794+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:16.863949+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:17.864074+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.849651337s of 30.339509964s, submitted: 14
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5c06cc000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:18.864182+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173382 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:19.864268+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:20.864353+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa35b000/0x0/0x4ffc00000, data 0xe3d743/0xf01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:21.864455+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:22.864572+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa35b000/0x0/0x4ffc00000, data 0xe3d743/0xf01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:23.864710+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0000 session 0x55c5bff8c1e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173382 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:24.864869+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5bff912c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1400 session 0x55c5bfd734a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:25.864944+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c06de1e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:26.865080+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:27.865235+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa336000/0x0/0x4ffc00000, data 0xe61753/0xf26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:28.865393+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182408 data_alloc: 218103808 data_used: 7618560
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:29.865547+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:30.865738+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa336000/0x0/0x4ffc00000, data 0xe61753/0xf26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa336000/0x0/0x4ffc00000, data 0xe61753/0xf26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:31.865885+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:32.866038+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:33.866117+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182408 data_alloc: 218103808 data_used: 7618560
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa336000/0x0/0x4ffc00000, data 0xe61753/0xf26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:34.866306+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa336000/0x0/0x4ffc00000, data 0xe61753/0xf26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:35.866458+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:36.866551+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:37.866762+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 14376960 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.720226288s of 20.127946854s, submitted: 8
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa2b8000/0x0/0x4ffc00000, data 0xedf753/0xfa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,0,0,9,3])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:38.866912+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108675072 unmapped: 15409152 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240552 data_alloc: 218103808 data_used: 7897088
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:39.867038+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 15908864 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:40.867170+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 15908864 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:41.867318+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9b73000/0x0/0x4ffc00000, data 0x1624753/0x16e9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 15908864 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:42.867482+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 15908864 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:43.867680+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 15908864 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249194 data_alloc: 218103808 data_used: 7888896
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:44.867838+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9b73000/0x0/0x4ffc00000, data 0x1624753/0x16e9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 15843328 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:45.868014+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16023552 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:46.868161+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16023552 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:47.868393+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16023552 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:48.868542+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9b4f000/0x0/0x4ffc00000, data 0x1648753/0x170d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16023552 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246418 data_alloc: 218103808 data_used: 7888896
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:49.868679+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16023552 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:50.868823+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.267821312s of 12.903651237s, submitted: 71
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 15941632 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:51.868962+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 3013 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1620 writes, 4804 keys, 1620 commit groups, 1.0 writes per commit group, ingest: 4.82 MB, 0.01 MB/s
                                           Interval WAL: 1620 writes, 710 syncs, 2.28 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 15941632 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:52.869114+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9b44000/0x0/0x4ffc00000, data 0x1653753/0x1718000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 15941632 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:53.869309+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 15941632 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246586 data_alloc: 218103808 data_used: 7888896
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:54.869467+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108150784 unmapped: 15933440 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:55.869583+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108150784 unmapped: 15933440 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:56.869720+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5bff8f0e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 29089792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:57.869909+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9aa4000/0x0/0x4ffc00000, data 0x16f3753/0x17b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 29065216 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:58.870120+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 29065216 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e91000/0x0/0x4ffc00000, data 0x2306753/0x23cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338864 data_alloc: 218103808 data_used: 7888896
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:59.870277+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 29065216 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bfe263c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:00.870400+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e91000/0x0/0x4ffc00000, data 0x2306753/0x23cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d5400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d5400 session 0x55c5bd709680
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107700224 unmapped: 29057024 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:01.870520+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06d9800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06d9800 session 0x55c5bfd42000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e91000/0x0/0x4ffc00000, data 0x2306753/0x23cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.908476830s of 11.043089867s, submitted: 14
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd7ebe00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 28835840 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:02.870645+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d5400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 28835840 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:03.870771+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111853568 unmapped: 24903680 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413071 data_alloc: 234881024 data_used: 17989632
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:04.870977+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e6c000/0x0/0x4ffc00000, data 0x232a763/0x23f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20021248 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:05.871167+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20021248 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:06.871346+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20021248 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:07.871596+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20021248 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:08.871805+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20021248 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434047 data_alloc: 234881024 data_used: 21106688
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:09.872024+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20021248 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e6c000/0x0/0x4ffc00000, data 0x232a763/0x23f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:10.872221+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 19963904 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:11.872359+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e6b000/0x0/0x4ffc00000, data 0x232a763/0x23f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 19931136 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:12.872482+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e6b000/0x0/0x4ffc00000, data 0x232a763/0x23f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 19922944 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:13.872715+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 19922944 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434519 data_alloc: 234881024 data_used: 21106688
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:14.872930+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.575447083s of 12.622465134s, submitted: 12
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 14589952 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:15.873094+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 14573568 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:16.873278+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 16531456 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:17.873474+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 16531456 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:18.873797+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8921000/0x0/0x4ffc00000, data 0x2875763/0x293b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 16531456 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484743 data_alloc: 234881024 data_used: 21970944
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:19.874097+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 16531456 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:20.874315+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 16523264 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:21.874548+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 15474688 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:22.874748+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bff861e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d5400 session 0x55c5bff8e5a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8921000/0x0/0x4ffc00000, data 0x2875763/0x293b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5bd7083c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:23.874949+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258184 data_alloc: 218103808 data_used: 7888896
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:24.875105+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:25.875248+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:26.875536+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:27.875874+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:28.876010+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9b41000/0x0/0x4ffc00000, data 0x1656753/0x171b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0000 session 0x55c5bf51b2c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5bd8250e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258184 data_alloc: 218103808 data_used: 7888896
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.593686104s of 14.806592941s, submitted: 85
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:29.876170+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd7ea5a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:30.876354+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:31.876528+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:32.876671+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:33.876795+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180621 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:34.876939+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:35.877092+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:36.877253+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:37.877424+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:38.877589+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180621 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:39.877713+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:40.877860+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:41.877997+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:42.878167+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:43.878318+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:44.878523+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180621 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:45.878661+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:46.878822+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:47.878997+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:48.879295+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:49.879485+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180621 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:50.879733+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:51.880255+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:52.880505+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:53.880728+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:54.880906+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180621 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:55.881254+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:56.881393+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d5400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d5400 session 0x55c5be1ae1e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bff914a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5c0ab52c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c09981e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d5400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.276571274s of 27.300722122s, submitted: 9
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d5400 session 0x55c5c087f860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5bf1b65a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bff8b860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4000 session 0x55c5be1af680
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd824000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 32055296 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:57.881762+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 32055296 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:58.881950+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 32047104 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:59.882362+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251310 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a5a000/0x0/0x4ffc00000, data 0x173d753/0x1802000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 32038912 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:00.882598+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 32038912 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:01.882841+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4000 session 0x55c5c01334a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 32038912 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:02.883097+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d5400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d5400 session 0x55c5c0132d20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:03.883270+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 32038912 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5c0132b40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5c0133c20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a5a000/0x0/0x4ffc00000, data 0x173d753/0x1802000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a5a000/0x0/0x4ffc00000, data 0x173d753/0x1802000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:04.883405+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112082944 unmapped: 32022528 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256506 data_alloc: 218103808 data_used: 6991872
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:05.883586+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112082944 unmapped: 32022528 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:06.883710+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 28573696 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:07.883872+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 28565504 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a58000/0x0/0x4ffc00000, data 0x173d786/0x1804000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:08.883991+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 28565504 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:09.884117+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 28565504 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325514 data_alloc: 234881024 data_used: 17256448
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a58000/0x0/0x4ffc00000, data 0x173d786/0x1804000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:10.884278+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 28565504 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:11.884407+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 28565504 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a58000/0x0/0x4ffc00000, data 0x173d786/0x1804000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:12.884540+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 28565504 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:13.884733+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a58000/0x0/0x4ffc00000, data 0x173d786/0x1804000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 28549120 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:14.884916+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 28540928 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325514 data_alloc: 234881024 data_used: 17256448
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:15.885051+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 28540928 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.382997513s of 19.487087250s, submitted: 30
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:16.885249+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 28540928 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:17.885413+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 27484160 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98b1000/0x0/0x4ffc00000, data 0x18e4786/0x19ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:18.885542+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 27271168 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:19.885670+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352266 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:20.885789+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:21.885929+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:22.886051+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:23.886178+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:24.886323+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352266 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:25.886465+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:26.886619+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:27.886780+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:28.886904+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:29.887066+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 27074560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352266 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:30.887253+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 27074560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:31.887438+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:32.887649+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:33.887803+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:34.887955+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352266 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:35.888099+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:36.888245+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:37.888416+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:38.888598+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:39.888762+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352266 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:40.888897+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:41.889026+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5be470400 session 0x55c5bf1bde00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d5400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5be470800 session 0x55c5be0a6000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:42.889203+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:43.889389+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:44.889523+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352266 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:45.889687+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:46.889829+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:47.890009+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:48.890170+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.537471771s of 32.575607300s, submitted: 22
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:49.890328+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 27099136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350290 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:50.890528+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:51.890728+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989f000/0x0/0x4ffc00000, data 0x18f6786/0x19bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:52.890934+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:53.891167+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:54.891433+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989f000/0x0/0x4ffc00000, data 0x18f6786/0x19bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350218 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:55.891648+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:56.891816+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989f000/0x0/0x4ffc00000, data 0x18f6786/0x19bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4400 session 0x55c5be0a7860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4800 session 0x55c5bf51a960
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4c00 session 0x55c5c06cdc20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989f000/0x0/0x4ffc00000, data 0x18f6786/0x19bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bd7f7860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:57.892057+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5400 session 0x55c5bff883c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 26894336 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4400 session 0x55c5bd7eb2c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4800 session 0x55c5bd8e2f00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4c00 session 0x55c5bfd73860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5c06df860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:58.892277+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26877952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:59.892437+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.740381241s of 10.729345322s, submitted: 250
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26877952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355752 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:00.892578+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26877952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:01.892810+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26861568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:02.892958+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 26828800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:03.893244+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 26828800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:04.893387+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 26828800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355752 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:05.893544+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 26828800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:06.893701+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 26828800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:07.893876+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 26828800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:08.894020+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:09.894164+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355084 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:10.894321+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:11.894467+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:12.894639+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:13.894772+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:14.894898+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355084 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd314400 session 0x55c5c087f2c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0c00 session 0x55c5bd8df860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:15.895061+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:16.895173+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:17.895348+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:18.895507+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:19.895650+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355084 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:20.895792+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:21.895914+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:22.896068+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:23.896185+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:24.896328+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bfdf4000 session 0x55c5bf1d5e00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 26779648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.116369247s of 25.326297760s, submitted: 2
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355400 data_alloc: 234881024 data_used: 17641472
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:25.896615+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 25370624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf4000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:26.896736+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bfdf4800 session 0x55c5bf1c3e00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 25370624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:27.896931+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 25231360 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:28.897138+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9610000/0x0/0x4ffc00000, data 0x1b837f8/0x1c4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 25231360 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:29.897283+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 25231360 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380176 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:30.897407+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 25231360 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:31.897536+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 25198592 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:32.897644+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 25182208 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f960a000/0x0/0x4ffc00000, data 0x1b897f8/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:33.897782+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 25182208 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:34.897914+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118931456 unmapped: 25174016 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.028648376s of 10.022297859s, submitted: 115
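The _kv_sync_thread utilization line is a simple duty-cycle report: over the roughly 10 s interval the thread was idle for 9.028648376 s, so it spent about 0.99 s flushing the 115 submitted transactions. Assuming "submitted" counts the transactions committed in that window, that is about 8.6 ms of sync-thread time per transaction:

    # Values copied from the utilization line above.
    idle, window, submitted = 9.028648376, 10.022297859, 115
    busy = window - idle                       # ~0.9936 s of actual kv sync work
    print(f"idle {100 * idle / window:.1f}%, "
          f"{1000 * busy / submitted:.1f} ms/txn")   # idle 90.1%, 8.6 ms/txn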
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381578 data_alloc: 234881024 data_used: 17645568
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:35.898063+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f960a000/0x0/0x4ffc00000, data 0x1b897f8/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 25141248 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:36.898212+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 25141248 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:37.898373+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 25141248 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:38.898501+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 25141248 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9607000/0x0/0x4ffc00000, data 0x1b8c7f8/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:39.898627+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 25141248 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382226 data_alloc: 234881024 data_used: 17645568
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:40.898772+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf1c23c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118972416 unmapped: 25133056 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9607000/0x0/0x4ffc00000, data 0x1b8c7f8/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:41.898914+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf717680
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:42.899062+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:43.899313+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:44.899439+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357030 data_alloc: 234881024 data_used: 17637376
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:45.899724+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:46.899856+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989e000/0x0/0x4ffc00000, data 0x18f6786/0x19bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:47.900018+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:48.900138+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.461967468s of 14.074635506s, submitted: 82
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd799a40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4000 session 0x55c5c048ab40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:49.900262+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bfe27860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:50.900397+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:51.900506+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:52.900633+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:53.901031+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:54.901292+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:55.901458+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:56.901587+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:57.901774+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:58.901909+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:59.903333+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:00.903580+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:01.903748+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:02.903993+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:03.904280+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:04.904625+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:05.904804+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:06.905046+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:07.905267+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:08.905680+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:09.906079+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:10.906418+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:11.906642+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:12.906991+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:13.907265+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:14.907536+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:15.907680+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:16.907957+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:17.908261+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:18.908449+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:19.908601+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:20.908950+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:21.909175+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:22.909368+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:23.909588+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:24.909775+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:25.909936+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:26.910147+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:27.910426+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:28.910546+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:29.910682+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:30.910833+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:31.911043+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:32.911323+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:33.911488+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:34.911620+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bd8dfc20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5c06df680
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0c00 session 0x55c5bd8e3a40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5c00 session 0x55c5c06df860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 31580160 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.136810303s of 46.268627167s, submitted: 53
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206093 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bfd72780
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5c048b0e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5c06cc780
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:35.911745+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0c00 session 0x55c5bf51a960
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852000 session 0x55c5bd7f63c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d85000/0x0/0x4ffc00000, data 0x141177c/0x14d7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 31080448 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:36.912017+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 31080448 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:37.912345+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d85000/0x0/0x4ffc00000, data 0x14117b5/0x14d7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 31080448 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:38.912571+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 31072256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:39.912788+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d85000/0x0/0x4ffc00000, data 0x14117b5/0x14d7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 31072256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254289 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:40.912920+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c0133680
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5c09990e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 31072256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:41.913070+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d85000/0x0/0x4ffc00000, data 0x14117b5/0x14d7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf1de780
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0c00 session 0x55c5bf1ded20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 31072256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:42.913264+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 31072256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:43.913412+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 31203328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d84000/0x0/0x4ffc00000, data 0x14117c5/0x14d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:44.913561+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 30875648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294299 data_alloc: 234881024 data_used: 12804096
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:45.913716+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 30875648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:46.913862+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.153660774s of 11.437462807s, submitted: 40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852400 session 0x55c5bd72ab40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5bd7ea000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110313472 unmapped: 33792000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bff881e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:47.914031+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa30f000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110313472 unmapped: 33792000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:48.914234+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: mgrc ms_handle_reset ms_handle_reset con 0x55c5bd70b800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2891176105
Jan 26 10:19:40 compute-0 ceph-osd[82841]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2891176105,v1:192.168.122.100:6801/2891176105]
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: get_auth_request con 0x55c5c0852000 auth_method 0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: mgrc handle_mgr_configure stats_period=5
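The mgrc lines just above record a manager-session bounce rather than a monitor event: the connection to the active ceph-mgr was reset, the client terminated the session and immediately re-established it against the mgr's v2/v1 endpoints at 192.168.122.100:6800-6801, and the new session was configured to report stats every 5 seconds (stats_period=5). A small filter, assuming only the message text shown here, pulls those endpoints out of the reconnect line:

    import re

    line = ("mgrc reconnect Starting new session with "
            "[v2:192.168.122.100:6800/2891176105,"
            "v1:192.168.122.100:6801/2891176105]")
    # Each endpoint looks like v2:<ip>:<port>/<nonce>; keep proto, ip, port.
    for proto, ip, port in re.findall(r"(v[12]):([\d.]+):(\d+)/\d+", line):
        print(proto, ip, port)   # v2 192.168.122.100 6800, then v1 ... 6801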
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:49.914384+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207764 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:50.914520+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa30f000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:51.914642+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:52.914770+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa30f000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:53.914998+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:54.915253+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207764 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:55.915375+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:56.915618+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa30f000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:57.916310+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa30f000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:58.916469+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:59.916610+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207764 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:00.916728+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:01.916909+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf7172c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf7165a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0c00 session 0x55c5bf7170e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bf716780
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.421150208s of 15.532203674s, submitted: 31
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf716b40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:02.917021+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:03.917160+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9e7f000/0x0/0x4ffc00000, data 0x1319743/0x13dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:04.917546+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253808 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:05.917857+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:06.918904+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:07.919251+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9e7f000/0x0/0x4ffc00000, data 0x1319743/0x13dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:08.920096+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf1c34a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 33300480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:09.920290+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 33300480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255932 data_alloc: 218103808 data_used: 6995968
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:10.921138+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 33300480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:11.921670+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x133d743/0x1401000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5bf1c23c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852c00 session 0x55c5bf1d4f00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 33300480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:12.922076+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.266777992s of 10.367519379s, submitted: 14
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c0133a40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:13.922221+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:14.922610+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212408 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:15.922758+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:16.922928+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:17.923241+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:18.923416+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:19.923640+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212408 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:20.923929+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:21.924181+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:22.924413+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:23.924605+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:24.924826+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212408 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:25.925001+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:26.925129+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:27.925334+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:28.925483+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:29.925650+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212408 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:30.925809+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:31.925977+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:32.926132+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:33.926392+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:34.926765+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212408 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:35.927053+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:36.927284+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:37.927465+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:38.927587+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:39.927753+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212408 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:40.927902+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:41.928104+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:42.928268+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bd799a40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5c06cd680
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 34603008 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5c0132b40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0853000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0853000 session 0x55c5bd8e2f00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.282125473s of 30.300283432s, submitted: 7
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:43.929293+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd8df4a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf716d20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5c0abab40
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5bd7f70e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0853400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0853400 session 0x55c5bfd42000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:44.929415+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:45.929556+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251623 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:46.929724+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:47.929909+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f13000/0x0/0x4ffc00000, data 0x12837b4/0x1349000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:48.930050+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f13000/0x0/0x4ffc00000, data 0x12837b4/0x1349000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:49.930337+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:50.930547+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287343 data_alloc: 234881024 data_used: 12296192
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f13000/0x0/0x4ffc00000, data 0x12837b4/0x1349000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:51.930723+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:52.930914+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:53.931087+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f13000/0x0/0x4ffc00000, data 0x12837b4/0x1349000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:54.931295+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:55.931462+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287343 data_alloc: 234881024 data_used: 12296192
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:56.931598+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:57.931774+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:58.931928+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:59.932021+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f13000/0x0/0x4ffc00000, data 0x12837b4/0x1349000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:00.932159+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287343 data_alloc: 234881024 data_used: 12296192
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.577539444s of 18.017475128s, submitted: 25
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:01.932374+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 30187520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:02.932541+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:03.932645+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:04.932807+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:05.932971+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x14e37b4/0x15a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310619 data_alloc: 234881024 data_used: 12427264
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:06.933110+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:07.933260+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:08.933390+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:09.933531+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9cad000/0x0/0x4ffc00000, data 0x14e97b4/0x15af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 30064640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:10.933685+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310619 data_alloc: 234881024 data_used: 12427264
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 30064640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:11.933867+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 30064640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:12.933995+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 30064640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.607760429s of 12.195022583s, submitted: 33
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c06de3c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:13.934118+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf7172c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9cad000/0x0/0x4ffc00000, data 0x14e97b4/0x15af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:14.934227+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:15.934377+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219565 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:16.934485+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:17.934658+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:18.934886+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:19.935015+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:20.935155+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219565 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:21.935274+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:22.935420+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:23.935566+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:24.935797+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:25.935965+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219565 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:26.936125+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:27.936313+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:28.936449+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:29.936551+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:30.936681+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219565 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:31.936887+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:32.937103+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:33.937290+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:34.937411+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:35.937542+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219565 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:36.937700+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:37.937911+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.061286926s of 24.648008347s, submitted: 18
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 28909568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf1d4000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:38.938051+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:39.938237+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa09d000/0x0/0x4ffc00000, data 0x10fb743/0x11bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:40.938450+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248785 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:41.938649+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:42.938796+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:43.938947+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:44.939080+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa09d000/0x0/0x4ffc00000, data 0x10fb743/0x11bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa09d000/0x0/0x4ffc00000, data 0x10fb743/0x11bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:45.939260+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248785 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5c048a5a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:46.939372+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111525888 unmapped: 32579584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa09c000/0x0/0x4ffc00000, data 0x10fb766/0x11c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:47.939527+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0853800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 32292864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:48.939674+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:49.939802+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:50.940148+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275470 data_alloc: 234881024 data_used: 10690560
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:51.940552+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:52.940632+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa09c000/0x0/0x4ffc00000, data 0x10fb766/0x11c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:53.940777+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:54.940960+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:55.941135+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275470 data_alloc: 234881024 data_used: 10690560
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:56.941278+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa09c000/0x0/0x4ffc00000, data 0x10fb766/0x11c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:57.941447+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.611251831s of 20.692722321s, submitted: 11
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:58.941577+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9e06000/0x0/0x4ffc00000, data 0x1391766/0x1456000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0853c00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0853c00 session 0x55c5bf1b63c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 26083328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:59.941713+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 26001408 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:00.941854+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1366062 data_alloc: 234881024 data_used: 10915840
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f974d000/0x0/0x4ffc00000, data 0x1a3c766/0x1b01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 25075712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f974d000/0x0/0x4ffc00000, data 0x1a3c766/0x1b01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:01.942010+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f974d000/0x0/0x4ffc00000, data 0x1a3c766/0x1b01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 25075712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f974d000/0x0/0x4ffc00000, data 0x1a3c766/0x1b01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:02.942174+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 25075712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bf1b7860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f974d000/0x0/0x4ffc00000, data 0x1a3c766/0x1b01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:03.942312+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf1b6000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 25067520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f974d000/0x0/0x4ffc00000, data 0x1a3c766/0x1b01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf1b70e0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5bff91680
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:04.942468+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119054336 unmapped: 25051136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06d9400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:05.942613+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06d8400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363284 data_alloc: 234881024 data_used: 11218944
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 26124288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:06.942764+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 26124288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:07.942952+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 26124288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:08.943088+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 26124288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1a3f776/0x1b05000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:09.943264+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 26124288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1a3f776/0x1b05000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:10.943394+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371948 data_alloc: 234881024 data_used: 12521472
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 26124288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:11.943521+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 26116096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:12.943652+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1a3f776/0x1b05000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 26116096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:13.943779+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 26116096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:14.943925+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 26116096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:15.944064+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372404 data_alloc: 234881024 data_used: 12533760
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 26116096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.459621429s of 17.793762207s, submitted: 89
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:16.944238+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 24985600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:17.944881+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 24985600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c8e000/0x0/0x4ffc00000, data 0x20f2776/0x21b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:18.945006+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 24379392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:19.945125+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 24371200 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c59000/0x0/0x4ffc00000, data 0x211f776/0x21e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:20.945266+0000)
Jan 26 10:19:40 compute-0 nova_compute[254880]: 2026-01-26 10:19:40.313 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429228 data_alloc: 234881024 data_used: 12754944
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 24371200 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:21.945421+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 24363008 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:22.945573+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 24363008 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:23.945751+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 23560192 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:24.945912+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 23560192 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:25.946120+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423732 data_alloc: 234881024 data_used: 12754944
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 23560192 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c64000/0x0/0x4ffc00000, data 0x2122776/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:26.946273+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 23560192 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:27.946448+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:28.946625+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:29.946780+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c64000/0x0/0x4ffc00000, data 0x2122776/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:30.946955+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c64000/0x0/0x4ffc00000, data 0x2122776/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423732 data_alloc: 234881024 data_used: 12754944
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.520044327s of 14.694757462s, submitted: 74
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c64000/0x0/0x4ffc00000, data 0x2122776/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:31.947096+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c64000/0x0/0x4ffc00000, data 0x2122776/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:32.947257+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:33.947392+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06d8400 session 0x55c5bfd732c0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06d9400 session 0x55c5bd708960
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:34.947513+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:35.947809+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344919 data_alloc: 234881024 data_used: 10915840
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bf1de780
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:36.947951+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:37.948130+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f94d3000/0x0/0x4ffc00000, data 0x18b4766/0x1979000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:38.948280+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:39.948530+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:40.948682+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344919 data_alloc: 234881024 data_used: 10915840
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f94d3000/0x0/0x4ffc00000, data 0x18b4766/0x1979000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:41.948845+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0853800 session 0x55c5be0a7860
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:42.949003+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.497808456s of 12.525653839s, submitted: 10
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:43.949184+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 25976832 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:44.949392+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 25976832 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73766/0xe38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:45.949563+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235519 data_alloc: 218103808 data_used: 6991872
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 25976832 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:46.949750+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5c06dfc20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:47.950022+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:48.950173+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:49.950409+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:50.950617+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234931 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:51.950854+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:52.951035+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:53.951229+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:54.951408+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:55.951627+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234931 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:56.951781+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:57.951934+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:58.952090+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:59.952257+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:00.952405+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234931 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:01.952577+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:02.952754+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:03.952921+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:04.953110+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:05.953291+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234931 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:06.953508+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:07.953793+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:08.954012+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:09.954321+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.461206436s of 26.923740387s, submitted: 19
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:10.954474+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf716d20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305892 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bf1bc5a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf1c34a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 26959872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06d9400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06d9400 session 0x55c5bf51ad20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0853800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0853800 session 0x55c5bd7085a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:11.954638+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f97be000/0x0/0x4ffc00000, data 0x15ca743/0x168e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 26951680 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:12.954916+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 26951680 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:13.955071+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f97be000/0x0/0x4ffc00000, data 0x15ca743/0x168e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 26951680 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:14.955225+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f97be000/0x0/0x4ffc00000, data 0x15ca743/0x168e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 26951680 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:15.955364+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5bf1d4f00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305892 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26943488 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bf1d4000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:16.955499+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bfd42000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06d9400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26943488 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06d9400 session 0x55c5bd8df4a0
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:17.955670+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0853800
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 26787840 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f979a000/0x0/0x4ffc00000, data 0x15ee743/0x16b2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:18.955824+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119504896 unmapped: 24600576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:19.956029+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 23486464 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:20.956249+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f979a000/0x0/0x4ffc00000, data 0x15ee743/0x16b2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365676 data_alloc: 234881024 data_used: 15360000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 23486464 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:21.956374+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 23486464 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:22.956499+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 23478272 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:23.956647+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f979a000/0x0/0x4ffc00000, data 0x15ee743/0x16b2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 23478272 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:24.956762+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 23470080 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:25.956873+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365676 data_alloc: 234881024 data_used: 15360000
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 23470080 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:26.956996+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 23470080 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:27.957170+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f979a000/0x0/0x4ffc00000, data 0x15ee743/0x16b2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 23470080 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:28.957257+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.075279236s of 18.855096817s, submitted: 27
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 21561344 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:29.957429+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 129441792 unmapped: 14663680 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:30.957647+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482926 data_alloc: 234881024 data_used: 17350656
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 129933312 unmapped: 14172160 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:31.957787+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8a71000/0x0/0x4ffc00000, data 0x230f743/0x23d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127148032 unmapped: 16957440 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:32.957921+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127148032 unmapped: 16957440 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:33.958290+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127148032 unmapped: 16957440 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:34.958444+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8a76000/0x0/0x4ffc00000, data 0x2312743/0x23d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:35.958579+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1478858 data_alloc: 234881024 data_used: 17584128
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:36.958712+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:37.958914+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:38.959048+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:39.959233+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8a55000/0x0/0x4ffc00000, data 0x2333743/0x23f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:40.959380+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479178 data_alloc: 234881024 data_used: 17592320
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:41.959557+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:42.959718+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.891769409s of 14.102795601s, submitted: 134
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5c0133680
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0853800 session 0x55c5bd8dfc20
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:43.959862+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 126312448 unmapped: 17793024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:44.960083+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8a55000/0x0/0x4ffc00000, data 0x2333743/0x23f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 126337024 unmapped: 17768448 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:45.960228+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251708 data_alloc: 218103808 data_used: 7098368
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:46.960447+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c0068f00
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:47.960711+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:48.960879+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:49.961035+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:50.961171+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:51.961302+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:52.961439+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:53.961574+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:54.961791+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:55.962001+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:56.962141+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:57.962334+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:58.962544+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:59.962748+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:00.962905+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:01.963098+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:02.963244+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:03.963402+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:04.963580+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:05.963729+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:06.963866+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:07.964040+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:08.964409+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:09.964551+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16821 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:10.964746+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:11.964883+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:12.965026+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:13.965180+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:14.965473+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:15.965697+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:16.965840+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:17.966024+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:18.966187+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:19.966403+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:20.966561+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:21.966735+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:22.966904+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:23.967055+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:24.967219+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:25.967393+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:26.967531+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:27.967772+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:28.967991+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:29.968143+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:30.968290+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:31.968452+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:32.968601+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:33.968776+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:34.968920+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:35.969098+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:36.969242+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:37.969395+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:38.969524+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:39.969656+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:40.969827+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:41.969997+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:42.970115+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:43.970274+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:44.970416+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:45.970577+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:46.970799+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:47.971085+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:48.971287+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:49.971441+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:50.971607+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:51.971799+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:52.971952+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:53.972295+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:54.972453+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:55.972570+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:56.972704+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:57.972892+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:58.973064+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:59.973284+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:00.973445+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:01.973618+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:02.973780+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:03.973922+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:04.974049+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:05.974209+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:19:40 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:19:40 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:06.974335+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120119296 unmapped: 23986176 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: do_command 'config diff' '{prefix=config diff}'
Jan 26 10:19:40 compute-0 ceph-osd[82841]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 26 10:19:40 compute-0 ceph-osd[82841]: do_command 'config show' '{prefix=config show}'
Jan 26 10:19:40 compute-0 ceph-osd[82841]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 26 10:19:40 compute-0 ceph-osd[82841]: do_command 'counter dump' '{prefix=counter dump}'
Jan 26 10:19:40 compute-0 ceph-osd[82841]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: do_command 'counter schema' '{prefix=counter schema}'
Jan 26 10:19:40 compute-0 ceph-osd[82841]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:07.974493+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120152064 unmapped: 23953408 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:08.974618+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 24313856 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:19:40 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:09.974766+0000)
Jan 26 10:19:40 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 24395776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:19:40 compute-0 ceph-osd[82841]: do_command 'log dump' '{prefix=log dump}'
Jan 26 10:19:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 26 10:19:40 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589349312' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26080 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1020071750' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.26023 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1248639188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2545354252' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.16785 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1848180643' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/136388260' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.26044 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.26423 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.16803 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1794654331' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2332631685' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2902301858' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1589349312' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:19:40 compute-0 nova_compute[254880]: 2026-01-26 10:19:40.616 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:19:40 compute-0 nova_compute[254880]: 2026-01-26 10:19:40.617 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:19:40 compute-0 nova_compute[254880]: 2026-01-26 10:19:40.617 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:19:40 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26453 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:40 compute-0 nova_compute[254880]: 2026-01-26 10:19:40.642 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:19:40 compute-0 nova_compute[254880]: 2026-01-26 10:19:40.642 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:19:40 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16839 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 26 10:19:40 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1900355160' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:19:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:40 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26095 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 10:19:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26471 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:19:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:41.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:19:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16854 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 26 10:19:41 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806127034' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26110 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26486 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:41.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16866 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 26 10:19:41 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2814067036' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26504 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26140 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 crontab[280181]: (root) LIST (root)
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.26059 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.26435 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.16821 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.26080 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3949276204' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1634668375' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.26453 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.16839 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1900355160' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.26095 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/461349660' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.26471 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2292204840' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3806127034' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1498955387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1644447967' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 26 10:19:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3632884701' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:19:41 compute-0 nova_compute[254880]: 2026-01-26 10:19:41.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:19:41 compute-0 nova_compute[254880]: 2026-01-26 10:19:41.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:19:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:19:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:19:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:19:42 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16890 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:19:42 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26516 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26173 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16908 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26534 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26194 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 26 10:19:42 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801966427' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16920 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.16854 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.26110 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.26486 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.16866 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2814067036' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.26504 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.26140 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4142164008' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.16890 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4041690340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1604141855' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1090076573' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/801966427' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/237639938' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 26 10:19:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:42 compute-0 nova_compute[254880]: 2026-01-26 10:19:42.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:19:43 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26218 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 26 10:19:43 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1665467363' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 26 10:19:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:43.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:43 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.16932 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26233 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 26 10:19:43 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2000606720' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 26 10:19:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:43.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:43 compute-0 ceph-mgr[74755]: [dashboard INFO request] [192.168.122.100:44324] [POST] [200] [0.002s] [4.0B] [fc30240a-5f48-42e3-908d-542384c7cfd6] /api/prometheus_receiver
Jan 26 10:19:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 26 10:19:43 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4117579945' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26248 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 26 10:19:43 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525525554' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 26 10:19:43 compute-0 nova_compute[254880]: 2026-01-26 10:19:43.953 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.26516 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.26173 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.16908 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.26534 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.26194 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.16920 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.26218 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1665467363' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/358209871' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3411450156' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2000606720' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1790643785' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4117579945' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3326537678' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/499056126' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 26 10:19:43 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3776534711' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 26 10:19:44 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3086565525' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 26 10:19:44 compute-0 nova_compute[254880]: 2026-01-26 10:19:44.187 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 26 10:19:44 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2172289278' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:19:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 26 10:19:44 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2192620793' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 26 10:19:44 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2836902024' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:44 compute-0 nova_compute[254880]: 2026-01-26 10:19:44.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:19:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 26 10:19:44 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843245282' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.16932 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.26233 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.26248 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/525525554' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3198617286' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3086565525' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2654797244' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4023938435' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2172289278' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1315709763' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3299340595' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2192620793' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4005334717' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1440409755' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2836902024' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3702236953' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 26 10:19:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2966148949' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 26 10:19:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 26 10:19:45 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724956292' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 26 10:19:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:45.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:45 compute-0 systemd[1]: Starting Hostname Service...
Jan 26 10:19:45 compute-0 nova_compute[254880]: 2026-01-26 10:19:45.315 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:45 compute-0 systemd[1]: Started Hostname Service.
Jan 26 10:19:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 26 10:19:45 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1745987047' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 26 10:19:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 26 10:19:45 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3799741745' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 10:19:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:45.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 26 10:19:45 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2500859101' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 26 10:19:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 26 10:19:45 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043531508' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 26 10:19:46 compute-0 sudo[280821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:19:46 compute-0 sudo[280821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:19:46 compute-0 sudo[280821]: pam_unix(sudo:session): session closed for user root
Jan 26 10:19:46 compute-0 ceph-mon[74456]: pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3843245282' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2724956292' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1985737188' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3473383718' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1745987047' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1702451587' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3799741745' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3462755018' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2009624782' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1215151384' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2500859101' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/376801596' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1043531508' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 26 10:19:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3704633609' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26678 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26687 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26693 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17073 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:19:46] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 26 10:19:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:19:46] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 26 10:19:46 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17079 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26705 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:19:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:19:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:19:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:19:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17085 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26711 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26377 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:47.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1338422426' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/782210037' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1380287593' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3007954583' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3704633609' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.26678 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.26687 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.26693 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2936063970' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.17073 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3805920001' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.17079 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.26705 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.17085 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3333168501' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.26711 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: from='client.26377 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:47.200Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:19:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:47.200Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:19:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26723 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17100 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:19:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:47.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:19:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26395 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 26 10:19:47 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1296280456' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26401 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 26 10:19:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26413 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17118 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:47 compute-0 nova_compute[254880]: 2026-01-26 10:19:47.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:19:47 compute-0 nova_compute[254880]: 2026-01-26 10:19:47.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:19:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26765 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Jan 26 10:19:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4035918887' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26425 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17136 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26783 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26449 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:19:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:19:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:19:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:19:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:19:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:19:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:48.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:19:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:19:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:48 compute-0 ceph-mon[74456]: from='client.26723 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mon[74456]: from='client.17100 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mon[74456]: from='client.26395 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2801142467' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1296280456' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mon[74456]: from='client.26401 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mon[74456]: from='client.26413 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mon[74456]: from='client.17118 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/578049670' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mon[74456]: from='client.26765 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4035918887' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2480028327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:19:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26810 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26467 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:19:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:49.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:19:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17148 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 nova_compute[254880]: 2026-01-26 10:19:49.189 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 26 10:19:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3732424704' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26825 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:19:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:19:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:49.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:19:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17157 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:19:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:19:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 26 10:19:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1515469255' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:19:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26506 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.26425 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.17136 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2429009617' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/734800606' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.26783 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.26449 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4149818100' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/470784433' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1554868477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.26810 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.26467 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3732424704' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1549787676' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1249600633' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1515469255' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2867308421' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:19:49 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:19:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:19:50 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:19:50 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:19:50 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:19:50 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26545 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:50 compute-0 nova_compute[254880]: 2026-01-26 10:19:50.317 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:50 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26924 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 26 10:19:51 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3052496530' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 26 10:19:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:51.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='client.17148 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='client.26825 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='client.17157 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='client.26506 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/42955068' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:19:51 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:19:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:19:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:51.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:19:51 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17280 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:51 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26629 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:19:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:19:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:19:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 26 10:19:52 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2365034706' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 26 10:19:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:19:52 compute-0 ceph-mon[74456]: from='client.26545 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:19:52 compute-0 ceph-mon[74456]: pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:52 compute-0 ceph-mon[74456]: from='client.26924 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3052496530' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 26 10:19:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2244219533' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 26 10:19:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3491128388' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 26 10:19:52 compute-0 ceph-mon[74456]: from='client.17280 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:52 compute-0 ceph-mon[74456]: from='client.26629 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/567662507' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 26 10:19:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4037000624' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 26 10:19:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2365034706' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 26 10:19:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Jan 26 10:19:52 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3971769629' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 26 10:19:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 26 10:19:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/851279560' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 26 10:19:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:53.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:53 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26975 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1194734970' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 26 10:19:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/736816567' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 26 10:19:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3971769629' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 26 10:19:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1873884357' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 26 10:19:53 compute-0 ceph-mon[74456]: pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/344258398' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 26 10:19:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/851279560' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 26 10:19:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:19:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:53.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:19:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 26 10:19:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3797688383' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 26 10:19:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:53.562Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:19:53 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26680 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:53 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17325 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:54 compute-0 nova_compute[254880]: 2026-01-26 10:19:54.190 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 26 10:19:54 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1877010172' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 26 10:19:54 compute-0 ceph-mon[74456]: from='client.26975 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3217068959' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 26 10:19:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3797688383' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 26 10:19:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2117799381' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 26 10:19:54 compute-0 ceph-mon[74456]: from='client.26680 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:54 compute-0 ceph-mon[74456]: from='client.17325 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3853829658' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 26 10:19:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1444274541' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 26 10:19:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1877010172' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 26 10:19:54 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27002 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:19:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:19:54.705 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:19:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:19:54.705 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:19:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:19:54.705 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:19:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 26 10:19:54 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1431985574' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 26 10:19:55 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26713 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:55.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:55 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17346 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:55 compute-0 nova_compute[254880]: 2026-01-26 10:19:55.319 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:55 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27017 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:55.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:55 compute-0 ceph-mon[74456]: from='client.27002 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3658239915' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 26 10:19:55 compute-0 ceph-mon[74456]: pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:19:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1431985574' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 26 10:19:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/898803205' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 26 10:19:55 compute-0 ceph-mon[74456]: from='client.26713 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/851808899' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 26 10:19:55 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27029 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 26 10:19:55 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3896528404' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 26 10:19:55 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26731 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:56 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17364 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:56 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26740 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:56 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17370 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:19:56] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 26 10:19:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:19:56] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 26 10:19:56 compute-0 ceph-mon[74456]: from='client.17346 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:56 compute-0 ceph-mon[74456]: from='client.27017 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:56 compute-0 ceph-mon[74456]: from='client.27029 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:56 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3896528404' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 26 10:19:56 compute-0 ceph-mon[74456]: from='client.26731 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:56 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3198355035' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 26 10:19:56 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1046191194' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 26 10:19:56 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27059 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 26 10:19:56 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501247436' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 26 10:19:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:19:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:19:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:19:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:19:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:19:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:19:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:57.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27068 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:57.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:19:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Jan 26 10:19:57 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966967997' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26761 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:19:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:57.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:19:57 compute-0 ovs-appctl[282904]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 26 10:19:57 compute-0 ovs-appctl[282909]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 26 10:19:57 compute-0 ovs-appctl[282919]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17397 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26773 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:57 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:19:58 compute-0 ceph-mon[74456]: from='client.17364 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:58 compute-0 ceph-mon[74456]: from='client.26740 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:58 compute-0 ceph-mon[74456]: from='client.17370 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3109192064' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 26 10:19:58 compute-0 ceph-mon[74456]: from='client.27059 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:58 compute-0 ceph-mon[74456]: pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2501247436' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 26 10:19:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2516004710' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 26 10:19:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2966967997' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 26 10:19:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2271613595' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 26 10:19:58 compute-0 podman[283117]: 2026-01-26 10:19:58.135002396 +0000 UTC m=+0.058370565 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17406 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27107 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Jan 26 10:19:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077096930' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27131 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:58.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:19:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:58.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:19:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:19:58.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:19:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:19:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Jan 26 10:19:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2874472771' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 26 10:19:59 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26812 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:59 compute-0 ceph-mon[74456]: from='client.27068 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:59 compute-0 ceph-mon[74456]: from='client.26761 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:59 compute-0 ceph-mon[74456]: from='client.17397 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:59 compute-0 ceph-mon[74456]: from='client.26773 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3463540812' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 26 10:19:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1353136906' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:19:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1353136906' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:19:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3548181160' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 26 10:19:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4077096930' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 26 10:19:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/88667544' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 26 10:19:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2874472771' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 26 10:19:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:19:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:19:59.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:19:59 compute-0 nova_compute[254880]: 2026-01-26 10:19:59.192 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:19:59 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17445 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:59 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26821 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:19:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:19:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:19:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:19:59.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:19:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:19:59 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17454 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:20:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Jan 26 10:20:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Jan 26 10:20:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] :      osd.2 observed slow operation indications in BlueStore
Jan 26 10:20:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Jan 26 10:20:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.1.0.compute-2.najyrz on compute-2 is in error state
Jan 26 10:20:00 compute-0 ceph-mon[74456]: from='client.17406 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:20:00 compute-0 ceph-mon[74456]: from='client.27107 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:20:00 compute-0 ceph-mon[74456]: from='client.27131 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:20:00 compute-0 ceph-mon[74456]: pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:00 compute-0 ceph-mon[74456]: from='client.26812 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:20:00 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3299309898' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 10:20:00 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1487676632' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 26 10:20:00 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3634761303' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 10:20:00 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2011035917' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:00 compute-0 ceph-mon[74456]: Health detail: HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Jan 26 10:20:00 compute-0 ceph-mon[74456]: [WRN] BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
Jan 26 10:20:00 compute-0 ceph-mon[74456]:      osd.2 observed slow operation indications in BlueStore
Jan 26 10:20:00 compute-0 ceph-mon[74456]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Jan 26 10:20:00 compute-0 ceph-mon[74456]:     daemon nfs.cephfs.1.0.compute-2.najyrz on compute-2 is in error state
Jan 26 10:20:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Jan 26 10:20:00 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410198663' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 10:20:00 compute-0 nova_compute[254880]: 2026-01-26 10:20:00.321 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:00 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27173 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Jan 26 10:20:00 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3758279747' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 26 10:20:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:01 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26857 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Jan 26 10:20:01 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1317730798' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:01.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:01 compute-0 ceph-mon[74456]: from='client.17445 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:20:01 compute-0 ceph-mon[74456]: from='client.26821 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:20:01 compute-0 ceph-mon[74456]: from='client.17454 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:20:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/410198663' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 10:20:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2107503081' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 26 10:20:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1050308051' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3758279747' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 26 10:20:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1400915670' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:20:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:20:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:01.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:20:01 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17484 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Jan 26 10:20:01 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2041293799' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:20:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:20:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:20:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:20:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:20:02 compute-0 ceph-mon[74456]: from='client.27173 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:02 compute-0 ceph-mon[74456]: pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:02 compute-0 ceph-mon[74456]: from='client.26857 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:02 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1317730798' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:02 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3955527617' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 26 10:20:02 compute-0 ceph-mon[74456]: from='client.17484 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:02 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1848367617' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:20:02 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4171722842' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:02 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2041293799' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:20:02 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/626783217' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 26 10:20:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Jan 26 10:20:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1518384862' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 26 10:20:02 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27224 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Jan 26 10:20:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2324653960' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:03.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/657391317' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 26 10:20:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1518384862' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 26 10:20:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/465928881' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:03 compute-0 ceph-mon[74456]: from='client.27224 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2371224661' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 26 10:20:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2324653960' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:03 compute-0 ceph-mon[74456]: pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2158557761' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 26 10:20:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Jan 26 10:20:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2436935829' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 26 10:20:03 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26893 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:03 compute-0 sshd-session[284430]: Invalid user zabbix from 157.245.76.178 port 48260
Jan 26 10:20:03 compute-0 sshd-session[284430]: Connection closed by invalid user zabbix 157.245.76.178 port 48260 [preauth]
Jan 26 10:20:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:03.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:03.566Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:03 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17523 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:20:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:20:03 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27248 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Jan 26 10:20:04 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169316236' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 26 10:20:04 compute-0 nova_compute[254880]: 2026-01-26 10:20:04.194 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:04 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2436935829' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 26 10:20:04 compute-0 ceph-mon[74456]: from='client.26893 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:04 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1688964418' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:04 compute-0 ceph-mon[74456]: from='client.17523 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:20:04 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/742367659' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 26 10:20:04 compute-0 ceph-mon[74456]: from='client.27248 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:04 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3169316236' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 26 10:20:04 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26914 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:20:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Jan 26 10:20:04 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529800502' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:04 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27272 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:04 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27281 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:05 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17541 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:05.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:05 compute-0 nova_compute[254880]: 2026-01-26 10:20:05.323 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:05 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26932 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:05 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3772322359' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:05 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1535477781' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 26 10:20:05 compute-0 ceph-mon[74456]: from='client.26914 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:05 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2529800502' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:05 compute-0 ceph-mon[74456]: from='client.27272 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:05 compute-0 ceph-mon[74456]: pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:05 compute-0 ceph-mon[74456]: from='client.27281 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:05 compute-0 ceph-mon[74456]: from='client.17541 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:05 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1774929671' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 26 10:20:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Jan 26 10:20:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2033412290' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 26 10:20:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:20:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:05.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:20:05 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26938 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:05 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17565 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17571 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:06 compute-0 sudo[284672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27308 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:06 compute-0 sudo[284672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:06 compute-0 sudo[284672]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Jan 26 10:20:06 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956985062' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:06 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2906224273' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:06 compute-0 ceph-mon[74456]: from='client.26932 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:06 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2033412290' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 26 10:20:06 compute-0 ceph-mon[74456]: from='client.26938 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:06 compute-0 ceph-mon[74456]: from='client.17565 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:06 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2459420877' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 26 10:20:06 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1227439692' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27314 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:20:06] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:20:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:20:06] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:20:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Jan 26 10:20:06 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440398676' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:06 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26965 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:20:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:20:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:20:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:20:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:07.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:07.203Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:20:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:07.203Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:20:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:07.204Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17592 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.26968 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:20:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:07.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:20:07 compute-0 ceph-mon[74456]: from='client.17571 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:07 compute-0 ceph-mon[74456]: from='client.27308 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:07 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3170118741' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 26 10:20:07 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3956985062' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 26 10:20:07 compute-0 ceph-mon[74456]: from='client.27314 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:07 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2440398676' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 26 10:20:07 compute-0 ceph-mon[74456]: pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:07 compute-0 ceph-mon[74456]: from='client.26965 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:07 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/634901022' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:20:07 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3521053892' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17604 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:20:07 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27338 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 virtqemud[254348]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 26 10:20:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Jan 26 10:20:08 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2581328815' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27350 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Jan 26 10:20:08 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3068562462' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27004 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 systemd[1]: Starting Time & Date Service...
Jan 26 10:20:08 compute-0 ceph-mon[74456]: from='client.17592 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 ceph-mon[74456]: from='client.26968 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 ceph-mon[74456]: from='client.17604 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/285396718' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 ceph-mon[74456]: from='client.27338 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2581328815' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/139676970' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3068562462' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3007392388' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17631 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 systemd[1]: Started Time & Date Service.
Jan 26 10:20:08 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27010 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:08.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:09.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:09 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17643 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:09 compute-0 nova_compute[254880]: 2026-01-26 10:20:09.196 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:09.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:20:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 26 10:20:09 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2842003281' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 10:20:09 compute-0 ceph-mon[74456]: from='client.27350 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:09 compute-0 ceph-mon[74456]: from='client.27004 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:09 compute-0 ceph-mon[74456]: from='client.17631 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:09 compute-0 ceph-mon[74456]: from='client.27010 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:09 compute-0 ceph-mon[74456]: pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:09 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4192985596' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 26 10:20:09 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3055550707' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 10:20:09 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2842003281' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 10:20:09 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3881451233' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 26 10:20:10 compute-0 sudo[285364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:20:10 compute-0 sudo[285364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:10 compute-0 sudo[285364]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Jan 26 10:20:10 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3473316423' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 26 10:20:10 compute-0 sudo[285389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 26 10:20:10 compute-0 sudo[285389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:10 compute-0 nova_compute[254880]: 2026-01-26 10:20:10.327 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:10 compute-0 sudo[285389]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:20:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:20:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:20:10 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:20:10 compute-0 podman[285430]: 2026-01-26 10:20:10.511840856 +0000 UTC m=+0.092976930 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 10:20:10 compute-0 sudo[285464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:20:10 compute-0 sudo[285464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:10 compute-0 sudo[285464]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:10 compute-0 sudo[285489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:20:10 compute-0 sudo[285489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:10 compute-0 ceph-mon[74456]: from='client.17643 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:20:10 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3473316423' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 26 10:20:10 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:20:10 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:20:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:11.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:11 compute-0 sudo[285489]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:20:11 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:20:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:20:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:20:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:20:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:20:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:20:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:20:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:20:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:20:11 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:20:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:20:11 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:20:11 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:20:11 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:20:11 compute-0 sudo[285548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:20:11 compute-0 sudo[285548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:11 compute-0 sudo[285548]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:11 compute-0 sudo[285573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:20:11 compute-0 sudo[285573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:11.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:11 compute-0 podman[285642]: 2026-01-26 10:20:11.724538112 +0000 UTC m=+0.040923744 container create 6d6425982dd9a9a21644999fbc4693de0f4e7614bd4f6eb5c15f66fee03a2376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 10:20:11 compute-0 systemd[1]: Started libpod-conmon-6d6425982dd9a9a21644999fbc4693de0f4e7614bd4f6eb5c15f66fee03a2376.scope.
Jan 26 10:20:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:20:11 compute-0 podman[285642]: 2026-01-26 10:20:11.797106651 +0000 UTC m=+0.113492303 container init 6d6425982dd9a9a21644999fbc4693de0f4e7614bd4f6eb5c15f66fee03a2376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_albattani, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 26 10:20:11 compute-0 podman[285642]: 2026-01-26 10:20:11.705802396 +0000 UTC m=+0.022188058 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:20:11 compute-0 ceph-mon[74456]: pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:11 compute-0 podman[285642]: 2026-01-26 10:20:11.803785208 +0000 UTC m=+0.120170850 container start 6d6425982dd9a9a21644999fbc4693de0f4e7614bd4f6eb5c15f66fee03a2376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 10:20:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:20:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:20:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:20:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:20:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:20:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:20:11 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:20:11 compute-0 podman[285642]: 2026-01-26 10:20:11.807320502 +0000 UTC m=+0.123706164 container attach 6d6425982dd9a9a21644999fbc4693de0f4e7614bd4f6eb5c15f66fee03a2376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 10:20:11 compute-0 wizardly_albattani[285659]: 167 167
Jan 26 10:20:11 compute-0 systemd[1]: libpod-6d6425982dd9a9a21644999fbc4693de0f4e7614bd4f6eb5c15f66fee03a2376.scope: Deactivated successfully.
Jan 26 10:20:11 compute-0 podman[285642]: 2026-01-26 10:20:11.810111115 +0000 UTC m=+0.126496757 container died 6d6425982dd9a9a21644999fbc4693de0f4e7614bd4f6eb5c15f66fee03a2376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_albattani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Jan 26 10:20:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e427f480f97fa27140eb054e7f586b175de72c92c06f2670874e760068ef8b96-merged.mount: Deactivated successfully.
Jan 26 10:20:11 compute-0 podman[285642]: 2026-01-26 10:20:11.842927783 +0000 UTC m=+0.159313425 container remove 6d6425982dd9a9a21644999fbc4693de0f4e7614bd4f6eb5c15f66fee03a2376 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_albattani, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 10:20:11 compute-0 systemd[1]: libpod-conmon-6d6425982dd9a9a21644999fbc4693de0f4e7614bd4f6eb5c15f66fee03a2376.scope: Deactivated successfully.
Jan 26 10:20:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:20:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:20:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:20:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:20:12 compute-0 podman[285681]: 2026-01-26 10:20:12.013520335 +0000 UTC m=+0.045459613 container create c1d3d3508d67d656d7ed0998089f19b30948dcd933b1232e0ceb4c9deb4e1f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Jan 26 10:20:12 compute-0 systemd[1]: Started libpod-conmon-c1d3d3508d67d656d7ed0998089f19b30948dcd933b1232e0ceb4c9deb4e1f01.scope.
Jan 26 10:20:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c66e61658f23f91ea1e38fa84e840cf5e000523d1519af35e722f2ed49a09d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c66e61658f23f91ea1e38fa84e840cf5e000523d1519af35e722f2ed49a09d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c66e61658f23f91ea1e38fa84e840cf5e000523d1519af35e722f2ed49a09d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c66e61658f23f91ea1e38fa84e840cf5e000523d1519af35e722f2ed49a09d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c66e61658f23f91ea1e38fa84e840cf5e000523d1519af35e722f2ed49a09d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:20:12 compute-0 podman[285681]: 2026-01-26 10:20:12.078272958 +0000 UTC m=+0.110212236 container init c1d3d3508d67d656d7ed0998089f19b30948dcd933b1232e0ceb4c9deb4e1f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_raman, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:20:12 compute-0 podman[285681]: 2026-01-26 10:20:12.083846155 +0000 UTC m=+0.115785433 container start c1d3d3508d67d656d7ed0998089f19b30948dcd933b1232e0ceb4c9deb4e1f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_raman, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:20:12 compute-0 podman[285681]: 2026-01-26 10:20:12.086914556 +0000 UTC m=+0.118853864 container attach c1d3d3508d67d656d7ed0998089f19b30948dcd933b1232e0ceb4c9deb4e1f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_raman, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:20:12 compute-0 podman[285681]: 2026-01-26 10:20:11.993106545 +0000 UTC m=+0.025045833 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:20:12 compute-0 festive_raman[285697]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:20:12 compute-0 festive_raman[285697]: --> All data devices are unavailable
Jan 26 10:20:12 compute-0 systemd[1]: libpod-c1d3d3508d67d656d7ed0998089f19b30948dcd933b1232e0ceb4c9deb4e1f01.scope: Deactivated successfully.
Jan 26 10:20:12 compute-0 podman[285712]: 2026-01-26 10:20:12.430789082 +0000 UTC m=+0.023142664 container died c1d3d3508d67d656d7ed0998089f19b30948dcd933b1232e0ceb4c9deb4e1f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 26 10:20:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-78c66e61658f23f91ea1e38fa84e840cf5e000523d1519af35e722f2ed49a09d-merged.mount: Deactivated successfully.
Jan 26 10:20:12 compute-0 podman[285712]: 2026-01-26 10:20:12.46778817 +0000 UTC m=+0.060141742 container remove c1d3d3508d67d656d7ed0998089f19b30948dcd933b1232e0ceb4c9deb4e1f01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_raman, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:20:12 compute-0 systemd[1]: libpod-conmon-c1d3d3508d67d656d7ed0998089f19b30948dcd933b1232e0ceb4c9deb4e1f01.scope: Deactivated successfully.
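
The festive_raman run above is cephadm's OSD-provisioning probe for the default_drive_group spec: ceph-volume inspected the spec's data devices (0 physical disks, 1 LVM logical volume) and reported all of them unavailable, i.e. the lone LV is already consumed by an existing OSD, so nothing new is created. A hypothetical reproduction of that dry run (the exact argv of festive_raman is not logged; the `lvm batch --report` subcommand and the LV path are assumptions based on the output here and the lvm list result further down):

    # Hypothetical: re-run the provisioning dry run that printed the two
    # "-->" lines above. Argv is assumed, not copied from the log.
    import subprocess

    CEPHADM = ("/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    FSID = "1a70b85d-e3fd-5814-8a6a-37ea00fcae30"

    subprocess.run(
        ["sudo", "python3", CEPHADM, "ceph-volume", "--fsid", FSID,
         "--", "lvm", "batch", "--report", "/dev/ceph_vg0/ceph_lv0"],
        check=True,
    )
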
Jan 26 10:20:12 compute-0 sudo[285573]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:12 compute-0 sudo[285728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:20:12 compute-0 sudo[285728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:12 compute-0 sudo[285728]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:12 compute-0 sudo[285753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:20:12 compute-0 sudo[285753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:12 compute-0 ceph-mon[74456]: pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:20:13 compute-0 podman[285820]: 2026-01-26 10:20:13.011917052 +0000 UTC m=+0.035900911 container create 5a7618be099b5b86624327996d8d411dc4b45f73537786476def3f0b5c5a25f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_kepler, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:20:13 compute-0 systemd[1]: Started libpod-conmon-5a7618be099b5b86624327996d8d411dc4b45f73537786476def3f0b5c5a25f9.scope.
Jan 26 10:20:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:20:13 compute-0 podman[285820]: 2026-01-26 10:20:13.080351182 +0000 UTC m=+0.104335071 container init 5a7618be099b5b86624327996d8d411dc4b45f73537786476def3f0b5c5a25f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_kepler, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:20:13 compute-0 podman[285820]: 2026-01-26 10:20:13.088308053 +0000 UTC m=+0.112291912 container start 5a7618be099b5b86624327996d8d411dc4b45f73537786476def3f0b5c5a25f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:20:13 compute-0 compassionate_kepler[285837]: 167 167
Jan 26 10:20:13 compute-0 systemd[1]: libpod-5a7618be099b5b86624327996d8d411dc4b45f73537786476def3f0b5c5a25f9.scope: Deactivated successfully.
Jan 26 10:20:13 compute-0 podman[285820]: 2026-01-26 10:20:12.997647225 +0000 UTC m=+0.021631104 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:20:13 compute-0 podman[285820]: 2026-01-26 10:20:13.093725615 +0000 UTC m=+0.117709494 container attach 5a7618be099b5b86624327996d8d411dc4b45f73537786476def3f0b5c5a25f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 10:20:13 compute-0 podman[285820]: 2026-01-26 10:20:13.094072865 +0000 UTC m=+0.118056724 container died 5a7618be099b5b86624327996d8d411dc4b45f73537786476def3f0b5c5a25f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 10:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-01ec40147286d84fd2217d643d11310040189ca9bc9b09fee3a3ed515e5dae63-merged.mount: Deactivated successfully.
Jan 26 10:20:13 compute-0 podman[285820]: 2026-01-26 10:20:13.134574077 +0000 UTC m=+0.158557936 container remove 5a7618be099b5b86624327996d8d411dc4b45f73537786476def3f0b5c5a25f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 26 10:20:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:13.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:13 compute-0 systemd[1]: libpod-conmon-5a7618be099b5b86624327996d8d411dc4b45f73537786476def3f0b5c5a25f9.scope: Deactivated successfully.
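
compassionate_kepler exists only to print "167 167": the uid and gid of the ceph user baked into the image (167:167 is the uid/gid pair reserved for ceph on RHEL-family distributions), which cephadm needs so it can chown daemon directories on the host. A rough equivalent of the probe (the `stat` entrypoint is an assumption inferred from the output, not taken from the log):

    # Assumed re-creation of the uid/gid probe: stat /var/lib/ceph inside the
    # ceph image and read back "uid gid" -- expected "167 167" as logged.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    uid, gid = map(int, out)
    print(uid, gid)  # 167 167
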
Jan 26 10:20:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:20:13 compute-0 podman[285861]: 2026-01-26 10:20:13.269247519 +0000 UTC m=+0.024372487 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:20:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:13.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:13 compute-0 podman[285861]: 2026-01-26 10:20:13.530865678 +0000 UTC m=+0.285990626 container create 0fe4ed2a4ceba7e7803359c70c23c76a0a0a3997bfc6740cab06c4b56ecf8b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hopper, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 10:20:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:13.569Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:20:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:13.570Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:20:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:13.570Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
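
The three alertmanager lines show one failure mode from two receivers: the ceph-dashboard webhook endpoints on compute-1 and compute-2 (port 8443, path /api/prometheus_receiver) never answer, surfacing first as a TCP i/o timeout and then as a context deadline on retry. A minimal reachability probe of those endpoints (URLs copied from the log; the empty JSON body is a placeholder, not a valid Alertmanager payload):

    # Probe the webhook receivers Alertmanager cannot reach. Expect both to
    # time out, matching the dial/context errors logged above.
    import json
    import urllib.request

    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        req = urllib.request.Request(
            url, data=json.dumps({}).encode(),
            headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(url, "->", resp.status)
        except Exception as exc:
            print(url, "-> unreachable:", exc)
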
Jan 26 10:20:14 compute-0 nova_compute[254880]: 2026-01-26 10:20:14.199 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:20:14 compute-0 systemd[1]: Started libpod-conmon-0fe4ed2a4ceba7e7803359c70c23c76a0a0a3997bfc6740cab06c4b56ecf8b57.scope.
Jan 26 10:20:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ae662b577c76b94935af0179c4bf737baed5d667099f90b5bb4ed5c0cde886/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ae662b577c76b94935af0179c4bf737baed5d667099f90b5bb4ed5c0cde886/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ae662b577c76b94935af0179c4bf737baed5d667099f90b5bb4ed5c0cde886/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ae662b577c76b94935af0179c4bf737baed5d667099f90b5bb4ed5c0cde886/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:20:14 compute-0 podman[285861]: 2026-01-26 10:20:14.85043008 +0000 UTC m=+1.605555058 container init 0fe4ed2a4ceba7e7803359c70c23c76a0a0a3997bfc6740cab06c4b56ecf8b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hopper, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 10:20:14 compute-0 podman[285861]: 2026-01-26 10:20:14.856810529 +0000 UTC m=+1.611935477 container start 0fe4ed2a4ceba7e7803359c70c23c76a0a0a3997bfc6740cab06c4b56ecf8b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hopper, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 10:20:14 compute-0 podman[285861]: 2026-01-26 10:20:14.875356369 +0000 UTC m=+1.630481337 container attach 0fe4ed2a4ceba7e7803359c70c23c76a0a0a3997bfc6740cab06c4b56ecf8b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hopper, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 26 10:20:15 compute-0 ceph-mon[74456]: pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]: {
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:     "0": [
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:         {
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:             "devices": [
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "/dev/loop3"
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:             ],
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:             "lv_name": "ceph_lv0",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:             "lv_size": "21470642176",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:             "name": "ceph_lv0",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:             "tags": {
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "ceph.cluster_name": "ceph",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "ceph.crush_device_class": "",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "ceph.encrypted": "0",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "ceph.osd_id": "0",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "ceph.type": "block",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "ceph.vdo": "0",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:                 "ceph.with_tpm": "0"
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:             },
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:             "type": "block",
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:             "vg_name": "ceph_vg0"
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:         }
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]:     ]
Jan 26 10:20:15 compute-0 suspicious_hopper[285880]: }
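
The JSON block above is the output of the `ceph-volume ... lvm list --format json` call issued at 10:20:12: one top-level key per OSD id, each entry carrying the backing LV, its physical devices, and the ceph.* LV tags. A minimal consumer, reusing the exact cephadm invocation from the sudo line (shim path, image, and fsid copied verbatim from the log; it needs the same root privileges cephadm used):

    # Re-run the "lvm list" probe logged above and map each OSD id to its
    # backing LV and physical devices. All constants are copied from the log.
    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    FSID = "1a70b85d-e3fd-5814-8a6a-37ea00fcae30"

    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            # osd.0 above: lv_path=/dev/ceph_vg0/ceph_lv0, devices=["/dev/loop3"]
            print(osd_id, lv["lv_path"], lv["devices"],
                  lv["tags"]["ceph.osd_fsid"])
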
Jan 26 10:20:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:15.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:15 compute-0 systemd[1]: libpod-0fe4ed2a4ceba7e7803359c70c23c76a0a0a3997bfc6740cab06c4b56ecf8b57.scope: Deactivated successfully.
Jan 26 10:20:15 compute-0 podman[285861]: 2026-01-26 10:20:15.153693591 +0000 UTC m=+1.908818539 container died 0fe4ed2a4ceba7e7803359c70c23c76a0a0a3997bfc6740cab06c4b56ecf8b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 26 10:20:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:20:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0ae662b577c76b94935af0179c4bf737baed5d667099f90b5bb4ed5c0cde886-merged.mount: Deactivated successfully.
Jan 26 10:20:15 compute-0 podman[285861]: 2026-01-26 10:20:15.235126055 +0000 UTC m=+1.990251003 container remove 0fe4ed2a4ceba7e7803359c70c23c76a0a0a3997bfc6740cab06c4b56ecf8b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:20:15 compute-0 systemd[1]: libpod-conmon-0fe4ed2a4ceba7e7803359c70c23c76a0a0a3997bfc6740cab06c4b56ecf8b57.scope: Deactivated successfully.
Jan 26 10:20:15 compute-0 sudo[285753]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:15 compute-0 nova_compute[254880]: 2026-01-26 10:20:15.327 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:15 compute-0 sudo[285902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:20:15 compute-0 sudo[285902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:15 compute-0 sudo[285902]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:15 compute-0 sudo[285927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:20:15 compute-0 sudo[285927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:15.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:15 compute-0 podman[285992]: 2026-01-26 10:20:15.79750173 +0000 UTC m=+0.059766992 container create 07274b2044e31f030c063526dcee1a2c80304a447a71f62ebd636ad307240a43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 10:20:15 compute-0 podman[285992]: 2026-01-26 10:20:15.759622957 +0000 UTC m=+0.021888219 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:20:15 compute-0 systemd[1]: Started libpod-conmon-07274b2044e31f030c063526dcee1a2c80304a447a71f62ebd636ad307240a43.scope.
Jan 26 10:20:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:20:15 compute-0 podman[285992]: 2026-01-26 10:20:15.912394869 +0000 UTC m=+0.174660161 container init 07274b2044e31f030c063526dcee1a2c80304a447a71f62ebd636ad307240a43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sutherland, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:20:15 compute-0 podman[285992]: 2026-01-26 10:20:15.920361479 +0000 UTC m=+0.182626741 container start 07274b2044e31f030c063526dcee1a2c80304a447a71f62ebd636ad307240a43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 10:20:15 compute-0 podman[285992]: 2026-01-26 10:20:15.923752429 +0000 UTC m=+0.186017711 container attach 07274b2044e31f030c063526dcee1a2c80304a447a71f62ebd636ad307240a43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sutherland, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 10:20:15 compute-0 elastic_sutherland[286009]: 167 167
Jan 26 10:20:15 compute-0 systemd[1]: libpod-07274b2044e31f030c063526dcee1a2c80304a447a71f62ebd636ad307240a43.scope: Deactivated successfully.
Jan 26 10:20:15 compute-0 podman[285992]: 2026-01-26 10:20:15.926144432 +0000 UTC m=+0.188409714 container died 07274b2044e31f030c063526dcee1a2c80304a447a71f62ebd636ad307240a43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:20:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6ad3892ac07c2d321cf2f0e1259c076d9574ed170aeaac5966946bbc6f92655-merged.mount: Deactivated successfully.
Jan 26 10:20:15 compute-0 podman[285992]: 2026-01-26 10:20:15.95974788 +0000 UTC m=+0.222013142 container remove 07274b2044e31f030c063526dcee1a2c80304a447a71f62ebd636ad307240a43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 10:20:15 compute-0 systemd[1]: libpod-conmon-07274b2044e31f030c063526dcee1a2c80304a447a71f62ebd636ad307240a43.scope: Deactivated successfully.
Jan 26 10:20:16 compute-0 podman[286034]: 2026-01-26 10:20:16.095403108 +0000 UTC m=+0.023916923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:20:16 compute-0 podman[286034]: 2026-01-26 10:20:16.192669352 +0000 UTC m=+0.121183147 container create 7ee57430cd817e030880a4a72c62f8cd30da734066378034ee363f93af3d1d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 10:20:16 compute-0 systemd[1]: Started libpod-conmon-7ee57430cd817e030880a4a72c62f8cd30da734066378034ee363f93af3d1d80.scope.
Jan 26 10:20:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:20:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3fb40ef73255e03aa1006c91713fcb8376ce98420baffd512c8142702de9b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:20:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3fb40ef73255e03aa1006c91713fcb8376ce98420baffd512c8142702de9b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:20:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3fb40ef73255e03aa1006c91713fcb8376ce98420baffd512c8142702de9b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:20:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3fb40ef73255e03aa1006c91713fcb8376ce98420baffd512c8142702de9b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:20:16 compute-0 podman[286034]: 2026-01-26 10:20:16.294309609 +0000 UTC m=+0.222823424 container init 7ee57430cd817e030880a4a72c62f8cd30da734066378034ee363f93af3d1d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_heisenberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:20:16 compute-0 podman[286034]: 2026-01-26 10:20:16.302747063 +0000 UTC m=+0.231260858 container start 7ee57430cd817e030880a4a72c62f8cd30da734066378034ee363f93af3d1d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_heisenberg, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:20:16 compute-0 podman[286034]: 2026-01-26 10:20:16.306598385 +0000 UTC m=+0.235112210 container attach 7ee57430cd817e030880a4a72c62f8cd30da734066378034ee363f93af3d1d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_heisenberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 10:20:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:20:16] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:20:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:20:16] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:20:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:20:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:20:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:20:17 compute-0 lvm[286127]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:20:17 compute-0 lvm[286127]: VG ceph_vg0 finished
Jan 26 10:20:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:20:17 compute-0 pedantic_heisenberg[286051]: {}
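
pedantic_heisenberg is the matching `raw list --format json` probe (the sudo line at 10:20:15), and it prints an empty object: this host has no raw-mode OSDs, because osd.0 is LVM-backed as the lvm list output showed. The two probes are complementary; a sketch of combining them:

    # Interpret the probe pair: "lvm list" found osd.0, "raw list" found
    # nothing, so every OSD on this host is LVM-backed.
    lvm_osds = {"0": ["...abridged lvm list entry..."]}  # parsed from above
    raw_osds = {}                                        # "{}" as logged

    print(sorted(set(lvm_osds) | set(raw_osds)))  # ['0']
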
Jan 26 10:20:17 compute-0 systemd[1]: libpod-7ee57430cd817e030880a4a72c62f8cd30da734066378034ee363f93af3d1d80.scope: Deactivated successfully.
Jan 26 10:20:17 compute-0 systemd[1]: libpod-7ee57430cd817e030880a4a72c62f8cd30da734066378034ee363f93af3d1d80.scope: Consumed 1.283s CPU time.
Jan 26 10:20:17 compute-0 podman[286034]: 2026-01-26 10:20:17.061307126 +0000 UTC m=+0.989820921 container died 7ee57430cd817e030880a4a72c62f8cd30da734066378034ee363f93af3d1d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_heisenberg, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 26 10:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb3fb40ef73255e03aa1006c91713fcb8376ce98420baffd512c8142702de9b8-merged.mount: Deactivated successfully.
Jan 26 10:20:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:17.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:20:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:17.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:17 compute-0 podman[286034]: 2026-01-26 10:20:17.22515503 +0000 UTC m=+1.153668825 container remove 7ee57430cd817e030880a4a72c62f8cd30da734066378034ee363f93af3d1d80 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_heisenberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:20:17 compute-0 sudo[285927]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:20:17 compute-0 systemd[1]: libpod-conmon-7ee57430cd817e030880a4a72c62f8cd30da734066378034ee363f93af3d1d80.scope: Deactivated successfully.
Jan 26 10:20:17 compute-0 ceph-mon[74456]: pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:20:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:17.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:20:17 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:20:17 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:20:17 compute-0 sudo[286144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:20:17 compute-0 sudo[286144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:17 compute-0 sudo[286144]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:20:18
Jan 26 10:20:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:20:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:20:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.control', 'vms', 'images', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', '.nfs', 'cephfs.cephfs.data']
Jan 26 10:20:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
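
The balancer pass above is a no-op: in upmap mode, capped at 5% misplaced objects, it prepared 0 of up to 10 candidate upmap changes because all 353 PGs are already active+clean and evenly placed across the listed pools. A hedged way to confirm the same state from the CLI (assumes a working `ceph` client and admin keyring on this host, as cephadm clearly has; the JSON field names are the usual `ceph balancer status` output, treated here as an assumption):

    # Ask the mgr what the balancer just decided.
    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(status.get("mode"), status.get("active"))  # expect: upmap True
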
Jan 26 10:20:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:20:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:20:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:20:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:20:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:20:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:20:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:20:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:20:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:18.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:19.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:20:19 compute-0 nova_compute[254880]: 2026-01-26 10:20:19.203 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
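
Each pg_autoscaler pair of lines applies the same arithmetic: pg_target = used_ratio x bias x root_pg_target, after which the raw target is quantized to a usable pg_num (the autoscaler only moves pg_num when it is off by a large factor, which is why these pools stay where they are). The logged values are consistent with root_pg_target = 300, e.g. 3 OSDs at the default mon_target_pg_per_osd = 100 -- an inference from the numbers, not something stated in the log:

    # Check the autoscaler arithmetic against three lines logged above.
    # ROOT_PG_TARGET = 300 is inferred (e.g. 3 OSDs x mon_target_pg_per_osd=100).
    ROOT_PG_TARGET = 300

    pools = [
        # (pool, used_ratio, bias, pg target as logged)
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("images",             0.000665858301588852,  1.0, 0.19975749047665559),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]
    for name, used, bias, logged in pools:
        target = used * bias * ROOT_PG_TARGET
        assert abs(target - logged) < 1e-12, name
        print(f"{name}: pg target {target:.6g} matches the log")
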
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:20:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:20:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:19.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:19 compute-0 ceph-mon[74456]: pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:20:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:20:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:20:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:20:20 compute-0 nova_compute[254880]: 2026-01-26 10:20:20.329 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:20 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:20:20 compute-0 ceph-mon[74456]: pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:20:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:21.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:20:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:21.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:20:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:20:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:20:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:20:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 10:20:22 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3943 syncs, 3.29 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2161 writes, 6920 keys, 2161 commit groups, 1.0 writes per commit group, ingest: 6.55 MB, 0.01 MB/s
                                           Interval WAL: 2161 writes, 930 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
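
The WAL arithmetic inside the stats dump checks out against itself: the interval line's 2161 writes over 930 syncs is the logged 2.32 writes per sync, and the cumulative side ("12K writes", 3943 syncs, 3.29) is the same ratio with the write count rounded for display:

    # Verify the RocksDB WAL figures from the dump above.
    writes, syncs = 2161, 930                       # interval line, exact
    print(f"{writes / syncs:.2f} writes per sync")  # -> 2.32, as logged

    cum_syncs, cum_ratio = 3943, 3.29               # cumulative line, rounded
    print(f"~{cum_syncs * cum_ratio:.0f} cumulative writes, shown as 12K")
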
Jan 26 10:20:22 compute-0 ceph-mon[74456]: pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:20:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:23.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:23.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:23.571Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:24 compute-0 nova_compute[254880]: 2026-01-26 10:20:24.244 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:24 compute-0 ceph-mon[74456]: pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:20:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:20:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:25.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:20:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:25 compute-0 nova_compute[254880]: 2026-01-26 10:20:25.350 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:25.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:26 compute-0 sudo[286180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:20:26 compute-0 sudo[286180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:26 compute-0 sudo[286180]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:20:26] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:20:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:20:26] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:20:26 compute-0 ceph-mon[74456]: pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:20:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:20:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:20:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:20:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:27.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:27.206Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:20:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:27.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:20:28 compute-0 ceph-mon[74456]: pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:28.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:20:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:28.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:20:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:28.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:20:29 compute-0 podman[286209]: 2026-01-26 10:20:29.146277457 +0000 UTC m=+0.069355385 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 10:20:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:20:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:29.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:20:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:29 compute-0 nova_compute[254880]: 2026-01-26 10:20:29.246 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:29.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:20:30 compute-0 nova_compute[254880]: 2026-01-26 10:20:30.352 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:30 compute-0 ceph-mon[74456]: pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:31.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:20:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:31.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:20:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:20:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:20:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:20:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:20:32 compute-0 ceph-mon[74456]: pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:20:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:33.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:20:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:33.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:33.572Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:20:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:20:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:20:34 compute-0 nova_compute[254880]: 2026-01-26 10:20:34.249 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:20:34 compute-0 ceph-mon[74456]: pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:20:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:35.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:20:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:35 compute-0 nova_compute[254880]: 2026-01-26 10:20:35.354 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:35.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:36 compute-0 ceph-mon[74456]: pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:20:36] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:20:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:20:36] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:20:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:20:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:20:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:20:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:20:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:37.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:37.206Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:20:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:37.206Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:20:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:37.207Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:37.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:37 compute-0 nova_compute[254880]: 2026-01-26 10:20:37.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:20:37 compute-0 nova_compute[254880]: 2026-01-26 10:20:37.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:20:37 compute-0 nova_compute[254880]: 2026-01-26 10:20:37.985 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:20:37 compute-0 nova_compute[254880]: 2026-01-26 10:20:37.986 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:20:37 compute-0 nova_compute[254880]: 2026-01-26 10:20:37.986 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:20:37 compute-0 nova_compute[254880]: 2026-01-26 10:20:37.986 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:20:37 compute-0 nova_compute[254880]: 2026-01-26 10:20:37.987 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:20:38 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:20:38 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1229155004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:20:38 compute-0 nova_compute[254880]: 2026-01-26 10:20:38.498 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:20:38 compute-0 ceph-mon[74456]: pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:38 compute-0 nova_compute[254880]: 2026-01-26 10:20:38.709 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:20:38 compute-0 nova_compute[254880]: 2026-01-26 10:20:38.711 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4352MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:20:38 compute-0 nova_compute[254880]: 2026-01-26 10:20:38.711 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:20:38 compute-0 nova_compute[254880]: 2026-01-26 10:20:38.712 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:20:38 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 26 10:20:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:38.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:38 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 26 10:20:39 compute-0 nova_compute[254880]: 2026-01-26 10:20:39.080 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:20:39 compute-0 nova_compute[254880]: 2026-01-26 10:20:39.080 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:20:39 compute-0 nova_compute[254880]: 2026-01-26 10:20:39.098 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:20:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:39.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:39 compute-0 nova_compute[254880]: 2026-01-26 10:20:39.263 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:39.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:20:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3914856423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:20:39 compute-0 nova_compute[254880]: 2026-01-26 10:20:39.603 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:20:39 compute-0 nova_compute[254880]: 2026-01-26 10:20:39.613 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:20:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1229155004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:20:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3914856423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:20:39 compute-0 nova_compute[254880]: 2026-01-26 10:20:39.647 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:20:39 compute-0 nova_compute[254880]: 2026-01-26 10:20:39.657 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:20:39 compute-0 nova_compute[254880]: 2026-01-26 10:20:39.657 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:20:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:20:40 compute-0 nova_compute[254880]: 2026-01-26 10:20:40.354 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:40 compute-0 ceph-mon[74456]: pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:41 compute-0 podman[286291]: 2026-01-26 10:20:41.163790293 +0000 UTC m=+0.087213927 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 26 10:20:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:20:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:41.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:20:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:41.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:41 compute-0 nova_compute[254880]: 2026-01-26 10:20:41.658 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:20:41 compute-0 nova_compute[254880]: 2026-01-26 10:20:41.659 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:20:41 compute-0 nova_compute[254880]: 2026-01-26 10:20:41.659 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:20:41 compute-0 nova_compute[254880]: 2026-01-26 10:20:41.680 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:20:41 compute-0 nova_compute[254880]: 2026-01-26 10:20:41.681 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:20:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:20:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:20:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:20:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:20:42 compute-0 ceph-mon[74456]: pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4260343528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:20:42 compute-0 nova_compute[254880]: 2026-01-26 10:20:42.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:20:42 compute-0 nova_compute[254880]: 2026-01-26 10:20:42.960 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:20:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:20:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:43.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:20:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:43.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:43.573Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:43 compute-0 nova_compute[254880]: 2026-01-26 10:20:43.955 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:20:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3327021470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:20:44 compute-0 nova_compute[254880]: 2026-01-26 10:20:44.266 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:44 compute-0 sshd-session[286319]: Invalid user zabbix from 157.245.76.178 port 40272
Jan 26 10:20:44 compute-0 sshd-session[286319]: Connection closed by invalid user zabbix 157.245.76.178 port 40272 [preauth]
Jan 26 10:20:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:20:45 compute-0 ceph-mon[74456]: pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:20:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:45.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:20:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:45 compute-0 nova_compute[254880]: 2026-01-26 10:20:45.399 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:45.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:46 compute-0 sudo[286323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:20:46 compute-0 sudo[286323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:20:46 compute-0 sudo[286323]: pam_unix(sudo:session): session closed for user root
Jan 26 10:20:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:20:46] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:20:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:20:46] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:20:46 compute-0 nova_compute[254880]: 2026-01-26 10:20:46.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:20:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:20:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:20:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:20:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:20:47 compute-0 ceph-mon[74456]: pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:47.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:47.208Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:20:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:47.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:47.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:47 compute-0 nova_compute[254880]: 2026-01-26 10:20:47.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:20:47 compute-0 nova_compute[254880]: 2026-01-26 10:20:47.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:20:48 compute-0 ceph-mon[74456]: pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:20:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:20:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:20:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:20:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:20:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:20:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:20:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:20:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:48.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:49.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/355653575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:20:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:20:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:49 compute-0 nova_compute[254880]: 2026-01-26 10:20:49.268 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:49.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:20:50 compute-0 ceph-mon[74456]: pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:50 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1812899865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:20:50 compute-0 nova_compute[254880]: 2026-01-26 10:20:50.401 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:51.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:51.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:20:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:20:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:20:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:20:52 compute-0 ceph-mon[74456]: pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:53.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:53.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:53.574Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:20:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:53.575Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:54 compute-0 nova_compute[254880]: 2026-01-26 10:20:54.318 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:54 compute-0 ceph-mon[74456]: pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:20:54.705 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:20:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:20:54.706 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:20:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:20:54.706 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:20:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:20:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:20:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:55.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:20:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:55 compute-0 nova_compute[254880]: 2026-01-26 10:20:55.402 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:55.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:56 compute-0 ceph-mon[74456]: pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:20:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:20:56] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:20:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:20:56] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:20:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:20:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:20:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:20:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:20:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:20:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:57.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:57.209Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:20:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:57.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:57.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:58 compute-0 ceph-mon[74456]: pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3073080115' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:20:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3073080115' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:20:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:58.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:20:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:58.881Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:20:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:20:58.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:20:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:20:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:20:59.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:20:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:20:59 compute-0 nova_compute[254880]: 2026-01-26 10:20:59.321 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:20:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:20:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:20:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:20:59.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:20:59 compute-0 podman[286362]: 2026-01-26 10:20:59.780950621 +0000 UTC m=+0.056174499 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 10:20:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:21:00 compute-0 nova_compute[254880]: 2026-01-26 10:21:00.449 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:00 compute-0 ceph-mon[74456]: pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:01.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:01.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:21:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:21:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:21:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:21:02 compute-0 ceph-mon[74456]: pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:03.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:03.575Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:21:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:03.576Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:21:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:03.576Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:03.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:21:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:21:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:21:04 compute-0 nova_compute[254880]: 2026-01-26 10:21:04.322 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:04 compute-0 ceph-mon[74456]: pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:21:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:05.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:05 compute-0 nova_compute[254880]: 2026-01-26 10:21:05.490 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:05.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:06 compute-0 sudo[286387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:21:06 compute-0 sudo[286387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:21:06 compute-0 sudo[286387]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:21:06] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:21:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:21:06] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:21:06 compute-0 ceph-mon[74456]: pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:21:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:21:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:21:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:21:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:07.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:07.211Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:21:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:07.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:21:07 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 26 10:21:07 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:07.986916) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:21:07 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 26 10:21:07 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422867986984, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1941, "num_deletes": 506, "total_data_size": 2644776, "memory_usage": 2710752, "flush_reason": "Manual Compaction"}
Jan 26 10:21:07 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422868016430, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2581045, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32720, "largest_seqno": 34660, "table_properties": {"data_size": 2571773, "index_size": 4934, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 27614, "raw_average_key_size": 21, "raw_value_size": 2549462, "raw_average_value_size": 1967, "num_data_blocks": 212, "num_entries": 1296, "num_filter_entries": 1296, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769422755, "oldest_key_time": 1769422755, "file_creation_time": 1769422867, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 29572 microseconds, and 7552 cpu microseconds.
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.016494) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2581045 bytes OK
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.016519) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.026972) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.027033) EVENT_LOG_v1 {"time_micros": 1769422868027020, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.027061) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2634379, prev total WAL file size 2634379, number of live WAL files 2.
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.028462) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2520KB)], [71(14MB)]
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422868028496, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 17531101, "oldest_snapshot_seqno": -1}
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6546 keys, 15261329 bytes, temperature: kUnknown
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422868114242, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 15261329, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15216494, "index_size": 27379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 172057, "raw_average_key_size": 26, "raw_value_size": 15097441, "raw_average_value_size": 2306, "num_data_blocks": 1082, "num_entries": 6546, "num_filter_entries": 6546, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769422868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.114510) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 15261329 bytes
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.121507) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.3 rd, 177.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 14.3 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(12.7) write-amplify(5.9) OK, records in: 7573, records dropped: 1027 output_compression: NoCompression
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.121548) EVENT_LOG_v1 {"time_micros": 1769422868121532, "job": 40, "event": "compaction_finished", "compaction_time_micros": 85815, "compaction_time_cpu_micros": 36189, "output_level": 6, "num_output_files": 1, "total_output_size": 15261329, "num_input_records": 7573, "num_output_records": 6546, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422868122143, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422868124597, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.028389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.124625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.124630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.124631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.124633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:21:08 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:08.124636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:21:08 compute-0 sudo[278048]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:08 compute-0 sshd-session[278047]: Received disconnect from 192.168.122.10 port 47990:11: disconnected by user
Jan 26 10:21:08 compute-0 sshd-session[278047]: Disconnected from user zuul 192.168.122.10 port 47990
Jan 26 10:21:08 compute-0 sshd-session[278044]: pam_unix(sshd:session): session closed for user zuul
Jan 26 10:21:08 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Jan 26 10:21:08 compute-0 systemd[1]: session-56.scope: Consumed 3min 1.543s CPU time, 782.8M memory peak, read 263.4M from disk, written 84.5M to disk.
Jan 26 10:21:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:08.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:08 compute-0 systemd-logind[787]: Session 56 logged out. Waiting for processes to exit.
Jan 26 10:21:08 compute-0 systemd-logind[787]: Removed session 56.
Jan 26 10:21:09 compute-0 ceph-mon[74456]: pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:09 compute-0 sshd-session[286416]: Accepted publickey for zuul from 192.168.122.10 port 45570 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 10:21:09 compute-0 systemd-logind[787]: New session 57 of user zuul.
Jan 26 10:21:09 compute-0 systemd[1]: Started Session 57 of User zuul.
Jan 26 10:21:09 compute-0 sshd-session[286416]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 10:21:09 compute-0 sudo[286420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2026-01-26-spfdsts.tar.xz
Jan 26 10:21:09 compute-0 sudo[286420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:21:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:21:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:09.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:21:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:09 compute-0 sudo[286420]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:09 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Jan 26 10:21:09 compute-0 sshd-session[286419]: Received disconnect from 192.168.122.10 port 45570:11: disconnected by user
Jan 26 10:21:09 compute-0 sshd-session[286419]: Disconnected from user zuul 192.168.122.10 port 45570
Jan 26 10:21:09 compute-0 systemd-logind[787]: Session 57 logged out. Waiting for processes to exit.
Jan 26 10:21:09 compute-0 sshd-session[286416]: pam_unix(sshd:session): session closed for user zuul
Jan 26 10:21:09 compute-0 systemd-logind[787]: Removed session 57.
Jan 26 10:21:09 compute-0 nova_compute[254880]: 2026-01-26 10:21:09.355 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:09 compute-0 sshd-session[286445]: Accepted publickey for zuul from 192.168.122.10 port 45578 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 10:21:09 compute-0 systemd-logind[787]: New session 58 of user zuul.
Jan 26 10:21:09 compute-0 systemd[1]: Started Session 58 of User zuul.
Jan 26 10:21:09 compute-0 sshd-session[286445]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 10:21:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:09.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:09 compute-0 sudo[286449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Jan 26 10:21:09 compute-0 sudo[286449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:21:09 compute-0 sudo[286449]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:09 compute-0 sshd-session[286448]: Received disconnect from 192.168.122.10 port 45578:11: disconnected by user
Jan 26 10:21:09 compute-0 sshd-session[286448]: Disconnected from user zuul 192.168.122.10 port 45578
Jan 26 10:21:09 compute-0 sshd-session[286445]: pam_unix(sshd:session): session closed for user zuul
Jan 26 10:21:09 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Jan 26 10:21:09 compute-0 systemd-logind[787]: Session 58 logged out. Waiting for processes to exit.
Jan 26 10:21:09 compute-0 systemd-logind[787]: Removed session 58.
Jan 26 10:21:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:21:10 compute-0 ceph-mon[74456]: pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:10 compute-0 nova_compute[254880]: 2026-01-26 10:21:10.492 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:21:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:11.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:21:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:11.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:21:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:21:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:21:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:21:12 compute-0 podman[286476]: 2026-01-26 10:21:12.167016259 +0000 UTC m=+0.096945783 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 26 10:21:12 compute-0 ceph-mon[74456]: pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:13.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:13.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:13.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:14 compute-0 nova_compute[254880]: 2026-01-26 10:21:14.358 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:14 compute-0 ceph-mon[74456]: pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:21:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:15.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:15 compute-0 nova_compute[254880]: 2026-01-26 10:21:15.494 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:15.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:21:16] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:21:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:21:16] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:21:16 compute-0 ceph-mon[74456]: pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:21:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:21:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:21:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:21:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:17.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:17.213Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:17.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:18 compute-0 sudo[286508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:21:18 compute-0 sudo[286508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:21:18 compute-0 sudo[286508]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:18 compute-0 sudo[286533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:21:18 compute-0 sudo[286533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:21:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 10:21:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 10:21:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 10:21:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:21:18
Jan 26 10:21:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:21:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:21:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'images', '.nfs', 'default.rgw.meta', 'backups', '.rgw.root', 'vms']
Jan 26 10:21:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 10:21:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:21:18 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:18 compute-0 ceph-mon[74456]: pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:21:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:21:18 compute-0 sudo[286533]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:21:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:21:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:21:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:21:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:21:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:21:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:18.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:21:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:18.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:19.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
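The radosgw "beast" lines above are the RGW frontend's access log; each record carries the request handle, client IP, user, timestamp, request line, HTTP status, byte count, and latency (these anonymous HEAD / probes from 192.168.122.100/.102 look like load-balancer health checks from the other controllers). A minimal parsing sketch in Python, assuming the field layout of the sample line above — the regex and its field names are inferred from that one sample, not a documented RGW log schema:

    import re

    # Illustrative pattern for the beast access-log line shown above; the
    # field names are assumptions based on that sample, not an official schema.
    BEAST_RE = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous '
            '[26/Jan/2026:10:21:19.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    if m:
        # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000
        print(m.group('ip'), m.group('request'), m.group('status'),
              m.group('latency'))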
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
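The pg_autoscaler arithmetic in the block above is reproducible from the logged numbers: each raw "pg target" equals the pool's usage ratio times its bias times a constant 300, which is consistent with the default mon_target_pg_per_osd of 100 on a 3-OSD cluster — an inference from the values, not something the log states. The raw target is then quantized (to a power of two, bounded by the pool's current/minimum pg_num, which is why tiny targets still land on 32 or, for .mgr, 1). A quick check in Python against three of the logged pools:

    # Reproduce the raw "pg target" values from the pg_autoscaler lines above.
    # The 300x multiplier (mon_target_pg_per_osd=100 * 3 OSDs) is an assumption
    # inferred from the logged numbers, not stated in the log itself.
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'images':             (0.000665858301588852, 1.0),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
    }
    for name, (usage_ratio, bias) in pools.items():
        raw_target = usage_ratio * bias * 300
        # Matches the logged "pg target" values up to float rounding:
        # .mgr -> 0.0021557..., images -> 0.19975..., meta -> 0.00061047...
        print(f'{name}: pg target {raw_target}')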
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:21:19 compute-0 nova_compute[254880]: 2026-01-26 10:21:19.361 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:21:19 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:21:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:21:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:21:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:21:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:21:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:21:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:19.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:21:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:21:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:21:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:21:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:21:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:21:19 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:21:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:21:19 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:21:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:21:19 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:21:19 compute-0 sudo[286591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:21:19 compute-0 sudo[286591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:21:19 compute-0 sudo[286591]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:21:19 compute-0 sudo[286616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:21:19 compute-0 sudo[286616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:21:20 compute-0 podman[286681]: 2026-01-26 10:21:20.339383108 +0000 UTC m=+0.042916614 container create 728b2ffeb7d7c692d48177f6bc7169af4109e72f28dfd95a8ab00522e1490a33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 26 10:21:20 compute-0 systemd[1]: Started libpod-conmon-728b2ffeb7d7c692d48177f6bc7169af4109e72f28dfd95a8ab00522e1490a33.scope.
Jan 26 10:21:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:21:20 compute-0 podman[286681]: 2026-01-26 10:21:20.321016212 +0000 UTC m=+0.024549738 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:21:20 compute-0 podman[286681]: 2026-01-26 10:21:20.417731638 +0000 UTC m=+0.121265164 container init 728b2ffeb7d7c692d48177f6bc7169af4109e72f28dfd95a8ab00522e1490a33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_bell, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 10:21:20 compute-0 podman[286681]: 2026-01-26 10:21:20.424221217 +0000 UTC m=+0.127754713 container start 728b2ffeb7d7c692d48177f6bc7169af4109e72f28dfd95a8ab00522e1490a33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 26 10:21:20 compute-0 podman[286681]: 2026-01-26 10:21:20.427552329 +0000 UTC m=+0.131085835 container attach 728b2ffeb7d7c692d48177f6bc7169af4109e72f28dfd95a8ab00522e1490a33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_bell, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:21:20 compute-0 sharp_bell[286698]: 167 167
Jan 26 10:21:20 compute-0 systemd[1]: libpod-728b2ffeb7d7c692d48177f6bc7169af4109e72f28dfd95a8ab00522e1490a33.scope: Deactivated successfully.
Jan 26 10:21:20 compute-0 podman[286681]: 2026-01-26 10:21:20.429922804 +0000 UTC m=+0.133456340 container died 728b2ffeb7d7c692d48177f6bc7169af4109e72f28dfd95a8ab00522e1490a33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Jan 26 10:21:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1fec6e61eeac4a0f71244cadb38aad1a152f513e3524a6c6db5b5354ea66ed4-merged.mount: Deactivated successfully.
Jan 26 10:21:20 compute-0 podman[286681]: 2026-01-26 10:21:20.498091953 +0000 UTC m=+0.201625459 container remove 728b2ffeb7d7c692d48177f6bc7169af4109e72f28dfd95a8ab00522e1490a33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 26 10:21:20 compute-0 nova_compute[254880]: 2026-01-26 10:21:20.497 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:20 compute-0 systemd[1]: libpod-conmon-728b2ffeb7d7c692d48177f6bc7169af4109e72f28dfd95a8ab00522e1490a33.scope: Deactivated successfully.
Jan 26 10:21:20 compute-0 podman[286723]: 2026-01-26 10:21:20.739447037 +0000 UTC m=+0.060998183 container create c0024844178883a3996a97bea1f3a0074576a0edf5e0c31e6ce34b976beaa3a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_keller, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:21:20 compute-0 systemd[1]: Started libpod-conmon-c0024844178883a3996a97bea1f3a0074576a0edf5e0c31e6ce34b976beaa3a7.scope.
Jan 26 10:21:20 compute-0 podman[286723]: 2026-01-26 10:21:20.716299798 +0000 UTC m=+0.037850984 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:21:20 compute-0 ceph-mon[74456]: pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:20 compute-0 ceph-mon[74456]: pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:21:20 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:20 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:20 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:21:20 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:21:20 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:21:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abefcefba64e47ac44283e07d97db9d39d2dd614a5b4c77581e601175261717e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abefcefba64e47ac44283e07d97db9d39d2dd614a5b4c77581e601175261717e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abefcefba64e47ac44283e07d97db9d39d2dd614a5b4c77581e601175261717e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abefcefba64e47ac44283e07d97db9d39d2dd614a5b4c77581e601175261717e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:21:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abefcefba64e47ac44283e07d97db9d39d2dd614a5b4c77581e601175261717e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:21:20 compute-0 podman[286723]: 2026-01-26 10:21:20.844840492 +0000 UTC m=+0.166391668 container init c0024844178883a3996a97bea1f3a0074576a0edf5e0c31e6ce34b976beaa3a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_keller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:21:20 compute-0 podman[286723]: 2026-01-26 10:21:20.85817177 +0000 UTC m=+0.179722926 container start c0024844178883a3996a97bea1f3a0074576a0edf5e0c31e6ce34b976beaa3a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_keller, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 10:21:20 compute-0 podman[286723]: 2026-01-26 10:21:20.861713117 +0000 UTC m=+0.183264293 container attach c0024844178883a3996a97bea1f3a0074576a0edf5e0c31e6ce34b976beaa3a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_keller, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Jan 26 10:21:21 compute-0 blissful_keller[286741]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:21:21 compute-0 blissful_keller[286741]: --> All data devices are unavailable
Jan 26 10:21:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:21.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:21 compute-0 systemd[1]: libpod-c0024844178883a3996a97bea1f3a0074576a0edf5e0c31e6ce34b976beaa3a7.scope: Deactivated successfully.
Jan 26 10:21:21 compute-0 podman[286723]: 2026-01-26 10:21:21.241030644 +0000 UTC m=+0.562581810 container died c0024844178883a3996a97bea1f3a0074576a0edf5e0c31e6ce34b976beaa3a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 26 10:21:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-abefcefba64e47ac44283e07d97db9d39d2dd614a5b4c77581e601175261717e-merged.mount: Deactivated successfully.
Jan 26 10:21:21 compute-0 podman[286723]: 2026-01-26 10:21:21.29383934 +0000 UTC m=+0.615390496 container remove c0024844178883a3996a97bea1f3a0074576a0edf5e0c31e6ce34b976beaa3a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_keller, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:21:21 compute-0 systemd[1]: libpod-conmon-c0024844178883a3996a97bea1f3a0074576a0edf5e0c31e6ce34b976beaa3a7.scope: Deactivated successfully.
Jan 26 10:21:21 compute-0 sudo[286616]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:21 compute-0 sudo[286770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:21:21 compute-0 sudo[286770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:21:21 compute-0 sudo[286770]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:21:21 compute-0 sudo[286795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:21:21 compute-0 sudo[286795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:21:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:21:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:21.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:21:21 compute-0 podman[286860]: 2026-01-26 10:21:21.899440854 +0000 UTC m=+0.047525451 container create 02a6ae611b0170ec9af1d504b35590ed52bd8de5cac16ed0eec2fefc57e50d7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_allen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:21:21 compute-0 systemd[1]: Started libpod-conmon-02a6ae611b0170ec9af1d504b35590ed52bd8de5cac16ed0eec2fefc57e50d7a.scope.
Jan 26 10:21:21 compute-0 podman[286860]: 2026-01-26 10:21:21.881379696 +0000 UTC m=+0.029464303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:21:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:21:21 compute-0 podman[286860]: 2026-01-26 10:21:21.998107565 +0000 UTC m=+0.146192172 container init 02a6ae611b0170ec9af1d504b35590ed52bd8de5cac16ed0eec2fefc57e50d7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:21:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:21:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:21:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:21:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:21:22 compute-0 podman[286860]: 2026-01-26 10:21:22.007423181 +0000 UTC m=+0.155507768 container start 02a6ae611b0170ec9af1d504b35590ed52bd8de5cac16ed0eec2fefc57e50d7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_allen, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 26 10:21:22 compute-0 podman[286860]: 2026-01-26 10:21:22.01138625 +0000 UTC m=+0.159470887 container attach 02a6ae611b0170ec9af1d504b35590ed52bd8de5cac16ed0eec2fefc57e50d7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:21:22 compute-0 competent_allen[286877]: 167 167
Jan 26 10:21:22 compute-0 systemd[1]: libpod-02a6ae611b0170ec9af1d504b35590ed52bd8de5cac16ed0eec2fefc57e50d7a.scope: Deactivated successfully.
Jan 26 10:21:22 compute-0 podman[286860]: 2026-01-26 10:21:22.015829873 +0000 UTC m=+0.163914470 container died 02a6ae611b0170ec9af1d504b35590ed52bd8de5cac16ed0eec2fefc57e50d7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_allen, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:21:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-575d9b6a58aeea85c94b9e00f15acd144315a0b5e2570ec6ed2697367951cba5-merged.mount: Deactivated successfully.
Jan 26 10:21:22 compute-0 podman[286860]: 2026-01-26 10:21:22.054354545 +0000 UTC m=+0.202439142 container remove 02a6ae611b0170ec9af1d504b35590ed52bd8de5cac16ed0eec2fefc57e50d7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_allen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 26 10:21:22 compute-0 systemd[1]: libpod-conmon-02a6ae611b0170ec9af1d504b35590ed52bd8de5cac16ed0eec2fefc57e50d7a.scope: Deactivated successfully.
Jan 26 10:21:22 compute-0 podman[286901]: 2026-01-26 10:21:22.256536648 +0000 UTC m=+0.052186849 container create 1e2b78ac37eda5051831354d12c9d825af1e08017ffd0d7dedb9d41b77a0560f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_pare, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 10:21:22 compute-0 systemd[1]: Started libpod-conmon-1e2b78ac37eda5051831354d12c9d825af1e08017ffd0d7dedb9d41b77a0560f.scope.
Jan 26 10:21:22 compute-0 podman[286901]: 2026-01-26 10:21:22.237811262 +0000 UTC m=+0.033461483 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:21:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42d5768961b190ec0bd249bba0a7145fca5843cee362752e01d848cbc617f28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42d5768961b190ec0bd249bba0a7145fca5843cee362752e01d848cbc617f28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42d5768961b190ec0bd249bba0a7145fca5843cee362752e01d848cbc617f28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42d5768961b190ec0bd249bba0a7145fca5843cee362752e01d848cbc617f28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:21:22 compute-0 podman[286901]: 2026-01-26 10:21:22.360244977 +0000 UTC m=+0.155895178 container init 1e2b78ac37eda5051831354d12c9d825af1e08017ffd0d7dedb9d41b77a0560f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:21:22 compute-0 podman[286901]: 2026-01-26 10:21:22.368671229 +0000 UTC m=+0.164321430 container start 1e2b78ac37eda5051831354d12c9d825af1e08017ffd0d7dedb9d41b77a0560f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_pare, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:21:22 compute-0 podman[286901]: 2026-01-26 10:21:22.373104362 +0000 UTC m=+0.168754573 container attach 1e2b78ac37eda5051831354d12c9d825af1e08017ffd0d7dedb9d41b77a0560f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_pare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:21:22 compute-0 brave_pare[286918]: {
Jan 26 10:21:22 compute-0 brave_pare[286918]:     "0": [
Jan 26 10:21:22 compute-0 brave_pare[286918]:         {
Jan 26 10:21:22 compute-0 brave_pare[286918]:             "devices": [
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "/dev/loop3"
Jan 26 10:21:22 compute-0 brave_pare[286918]:             ],
Jan 26 10:21:22 compute-0 brave_pare[286918]:             "lv_name": "ceph_lv0",
Jan 26 10:21:22 compute-0 brave_pare[286918]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:21:22 compute-0 brave_pare[286918]:             "lv_size": "21470642176",
Jan 26 10:21:22 compute-0 brave_pare[286918]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:21:22 compute-0 brave_pare[286918]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:21:22 compute-0 brave_pare[286918]:             "name": "ceph_lv0",
Jan 26 10:21:22 compute-0 brave_pare[286918]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:21:22 compute-0 brave_pare[286918]:             "tags": {
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "ceph.cluster_name": "ceph",
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "ceph.crush_device_class": "",
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "ceph.encrypted": "0",
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "ceph.osd_id": "0",
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "ceph.type": "block",
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "ceph.vdo": "0",
Jan 26 10:21:22 compute-0 brave_pare[286918]:                 "ceph.with_tpm": "0"
Jan 26 10:21:22 compute-0 brave_pare[286918]:             },
Jan 26 10:21:22 compute-0 brave_pare[286918]:             "type": "block",
Jan 26 10:21:22 compute-0 brave_pare[286918]:             "vg_name": "ceph_vg0"
Jan 26 10:21:22 compute-0 brave_pare[286918]:         }
Jan 26 10:21:22 compute-0 brave_pare[286918]:     ]
Jan 26 10:21:22 compute-0 brave_pare[286918]: }
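The `ceph-volume lvm list --format json` output above carries the same OSD metadata twice: once as the flat lv_tags string and once pre-split under "tags". When only the flat form is at hand (e.g. straight from LVM), it splits cleanly on commas and the first "=" of each pair; a minimal sketch using an abridged copy of the tag string from the output above (safe here because none of these values contain commas):

    lv_tags = ('ceph.block_device=/dev/ceph_vg0/ceph_lv0,'
               'ceph.cephx_lockbox_secret=,'
               'ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,'
               'ceph.osd_id=0,ceph.type=block')  # abridged from the log above

    # Each tag is key=value; maxsplit=1 keeps empty values (lockbox_secret) intact.
    tags = dict(kv.split('=', 1) for kv in lv_tags.split(','))
    # -> 0 ac85653c-ceaa-4fd5-80ce-94914596ed49
    print(tags['ceph.osd_id'], tags['ceph.osd_fsid'])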
Jan 26 10:21:22 compute-0 systemd[1]: libpod-1e2b78ac37eda5051831354d12c9d825af1e08017ffd0d7dedb9d41b77a0560f.scope: Deactivated successfully.
Jan 26 10:21:22 compute-0 podman[286901]: 2026-01-26 10:21:22.653335827 +0000 UTC m=+0.448986028 container died 1e2b78ac37eda5051831354d12c9d825af1e08017ffd0d7dedb9d41b77a0560f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_pare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:21:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d42d5768961b190ec0bd249bba0a7145fca5843cee362752e01d848cbc617f28-merged.mount: Deactivated successfully.
Jan 26 10:21:22 compute-0 podman[286901]: 2026-01-26 10:21:22.696795226 +0000 UTC m=+0.492445427 container remove 1e2b78ac37eda5051831354d12c9d825af1e08017ffd0d7dedb9d41b77a0560f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_pare, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 10:21:22 compute-0 systemd[1]: libpod-conmon-1e2b78ac37eda5051831354d12c9d825af1e08017ffd0d7dedb9d41b77a0560f.scope: Deactivated successfully.
Jan 26 10:21:22 compute-0 sudo[286795]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:22 compute-0 sudo[286941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:21:22 compute-0 sudo[286941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:21:22 compute-0 sudo[286941]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:22 compute-0 ceph-mon[74456]: pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:21:22 compute-0 sudo[286966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:21:22 compute-0 sudo[286966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:21:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:23.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:23 compute-0 podman[287032]: 2026-01-26 10:21:23.324417297 +0000 UTC m=+0.041610388 container create 652d2ec9d34288c0295905d8045ed0eedc705e5bdaf257b117f3fce2b9f3f6aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 26 10:21:23 compute-0 systemd[1]: Started libpod-conmon-652d2ec9d34288c0295905d8045ed0eedc705e5bdaf257b117f3fce2b9f3f6aa.scope.
Jan 26 10:21:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:21:23 compute-0 podman[287032]: 2026-01-26 10:21:23.30603396 +0000 UTC m=+0.023227061 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:21:23 compute-0 podman[287032]: 2026-01-26 10:21:23.414677786 +0000 UTC m=+0.131870897 container init 652d2ec9d34288c0295905d8045ed0eedc705e5bdaf257b117f3fce2b9f3f6aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_yalow, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 26 10:21:23 compute-0 podman[287032]: 2026-01-26 10:21:23.423221711 +0000 UTC m=+0.140414802 container start 652d2ec9d34288c0295905d8045ed0eedc705e5bdaf257b117f3fce2b9f3f6aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_yalow, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:21:23 compute-0 podman[287032]: 2026-01-26 10:21:23.427321424 +0000 UTC m=+0.144514515 container attach 652d2ec9d34288c0295905d8045ed0eedc705e5bdaf257b117f3fce2b9f3f6aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_yalow, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:21:23 compute-0 zealous_yalow[287048]: 167 167
Jan 26 10:21:23 compute-0 systemd[1]: libpod-652d2ec9d34288c0295905d8045ed0eedc705e5bdaf257b117f3fce2b9f3f6aa.scope: Deactivated successfully.
Jan 26 10:21:23 compute-0 conmon[287048]: conmon 652d2ec9d34288c02959 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-652d2ec9d34288c0295905d8045ed0eedc705e5bdaf257b117f3fce2b9f3f6aa.scope/container/memory.events
Jan 26 10:21:23 compute-0 podman[287032]: 2026-01-26 10:21:23.430411569 +0000 UTC m=+0.147604660 container died 652d2ec9d34288c0295905d8045ed0eedc705e5bdaf257b117f3fce2b9f3f6aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:21:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a6af99dcb8021c30f4a76969406587ffcbf0cc4b9aed3eef83a1c2c8f875ed3-merged.mount: Deactivated successfully.
Jan 26 10:21:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:21:23 compute-0 podman[287032]: 2026-01-26 10:21:23.479605995 +0000 UTC m=+0.196799116 container remove 652d2ec9d34288c0295905d8045ed0eedc705e5bdaf257b117f3fce2b9f3f6aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 26 10:21:23 compute-0 systemd[1]: libpod-conmon-652d2ec9d34288c0295905d8045ed0eedc705e5bdaf257b117f3fce2b9f3f6aa.scope: Deactivated successfully.
Jan 26 10:21:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:23.577Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:23.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:23 compute-0 podman[287072]: 2026-01-26 10:21:23.721247216 +0000 UTC m=+0.076199331 container create 5c171a22dc6fa5b42b4dfb0ac56883dcfb5201cac90b20c149eb8711ebf89022 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 10:21:23 compute-0 systemd[1]: Started libpod-conmon-5c171a22dc6fa5b42b4dfb0ac56883dcfb5201cac90b20c149eb8711ebf89022.scope.
Jan 26 10:21:23 compute-0 podman[287072]: 2026-01-26 10:21:23.691502926 +0000 UTC m=+0.046455031 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:21:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:21:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb2daf71ac9c9d03ffd843a7a4365a3bf7a7aaa32ecd91c86579841112a13366/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:21:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb2daf71ac9c9d03ffd843a7a4365a3bf7a7aaa32ecd91c86579841112a13366/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:21:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb2daf71ac9c9d03ffd843a7a4365a3bf7a7aaa32ecd91c86579841112a13366/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:21:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb2daf71ac9c9d03ffd843a7a4365a3bf7a7aaa32ecd91c86579841112a13366/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:21:23 compute-0 podman[287072]: 2026-01-26 10:21:23.822491478 +0000 UTC m=+0.177443603 container init 5c171a22dc6fa5b42b4dfb0ac56883dcfb5201cac90b20c149eb8711ebf89022 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lamarr, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 10:21:23 compute-0 podman[287072]: 2026-01-26 10:21:23.830932491 +0000 UTC m=+0.185884576 container start 5c171a22dc6fa5b42b4dfb0ac56883dcfb5201cac90b20c149eb8711ebf89022 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:21:23 compute-0 podman[287072]: 2026-01-26 10:21:23.834693225 +0000 UTC m=+0.189645330 container attach 5c171a22dc6fa5b42b4dfb0ac56883dcfb5201cac90b20c149eb8711ebf89022 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lamarr, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:21:24 compute-0 nova_compute[254880]: 2026-01-26 10:21:24.408 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:24 compute-0 lvm[287163]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:21:24 compute-0 lvm[287163]: VG ceph_vg0 finished
Jan 26 10:21:24 compute-0 beautiful_lamarr[287088]: {}
Jan 26 10:21:24 compute-0 systemd[1]: libpod-5c171a22dc6fa5b42b4dfb0ac56883dcfb5201cac90b20c149eb8711ebf89022.scope: Deactivated successfully.
Jan 26 10:21:24 compute-0 systemd[1]: libpod-5c171a22dc6fa5b42b4dfb0ac56883dcfb5201cac90b20c149eb8711ebf89022.scope: Consumed 1.304s CPU time.
Jan 26 10:21:24 compute-0 podman[287072]: 2026-01-26 10:21:24.696021279 +0000 UTC m=+1.050973394 container died 5c171a22dc6fa5b42b4dfb0ac56883dcfb5201cac90b20c149eb8711ebf89022 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lamarr, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 10:21:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb2daf71ac9c9d03ffd843a7a4365a3bf7a7aaa32ecd91c86579841112a13366-merged.mount: Deactivated successfully.
Jan 26 10:21:24 compute-0 podman[287072]: 2026-01-26 10:21:24.753987446 +0000 UTC m=+1.108939531 container remove 5c171a22dc6fa5b42b4dfb0ac56883dcfb5201cac90b20c149eb8711ebf89022 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 10:21:24 compute-0 systemd[1]: libpod-conmon-5c171a22dc6fa5b42b4dfb0ac56883dcfb5201cac90b20c149eb8711ebf89022.scope: Deactivated successfully.
Jan 26 10:21:24 compute-0 sudo[286966]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:21:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:21:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:21:24 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:25 compute-0 sudo[287182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:21:25 compute-0 sudo[287182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:21:25 compute-0 sudo[287182]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:25 compute-0 ceph-mon[74456]: pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:21:25 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:25 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:21:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:21:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:25.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:21:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:21:25 compute-0 nova_compute[254880]: 2026-01-26 10:21:25.499 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:25.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:25 compute-0 sshd-session[287207]: Invalid user zabbix from 157.245.76.178 port 39492
Jan 26 10:21:26 compute-0 sshd-session[287207]: Connection closed by invalid user zabbix 157.245.76.178 port 39492 [preauth]
Jan 26 10:21:26 compute-0 ceph-mon[74456]: pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:21:26 compute-0 sudo[287210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:21:26 compute-0 sudo[287210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:21:26 compute-0 sudo[287210]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:21:26] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:21:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:21:26] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:21:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:21:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:21:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:21:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:21:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:27.214Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:27.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:21:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:27.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:28 compute-0 ceph-mon[74456]: pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:21:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:28.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:29.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:29 compute-0 nova_compute[254880]: 2026-01-26 10:21:29.412 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:21:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:29.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:21:30 compute-0 podman[287238]: 2026-01-26 10:21:30.161255749 +0000 UTC m=+0.075851302 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 10:21:30 compute-0 nova_compute[254880]: 2026-01-26 10:21:30.501 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:30 compute-0 ceph-mon[74456]: pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:21:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:31.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:21:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:31.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:21:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:21:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:21:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:21:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:21:32 compute-0 ceph-mon[74456]: pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:33.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:33.580Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:21:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:33.581Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:21:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:33.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:21:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:21:34 compute-0 nova_compute[254880]: 2026-01-26 10:21:34.416 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:34 compute-0 ceph-mon[74456]: pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:21:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:21:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:35.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:35 compute-0 nova_compute[254880]: 2026-01-26 10:21:35.531 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:35.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:21:36] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:21:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:21:36] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:21:36 compute-0 ceph-mon[74456]: pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:21:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:21:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:21:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:21:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:37.215Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:21:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:37.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:21:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:37.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:38.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:38 compute-0 nova_compute[254880]: 2026-01-26 10:21:38.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:21:38 compute-0 ceph-mon[74456]: pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:38 compute-0 nova_compute[254880]: 2026-01-26 10:21:38.979 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:21:38 compute-0 nova_compute[254880]: 2026-01-26 10:21:38.979 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:21:38 compute-0 nova_compute[254880]: 2026-01-26 10:21:38.979 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:21:38 compute-0 nova_compute[254880]: 2026-01-26 10:21:38.980 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:21:38 compute-0 nova_compute[254880]: 2026-01-26 10:21:38.980 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:21:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:21:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:39.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:21:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:21:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1482975074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:21:39 compute-0 nova_compute[254880]: 2026-01-26 10:21:39.415 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:21:39 compute-0 nova_compute[254880]: 2026-01-26 10:21:39.460 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:39.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:39 compute-0 nova_compute[254880]: 2026-01-26 10:21:39.664 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:21:39 compute-0 nova_compute[254880]: 2026-01-26 10:21:39.665 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4447MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:21:39 compute-0 nova_compute[254880]: 2026-01-26 10:21:39.666 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:21:39 compute-0 nova_compute[254880]: 2026-01-26 10:21:39.666 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:21:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.918414) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422899918575, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 552, "num_deletes": 250, "total_data_size": 749327, "memory_usage": 758952, "flush_reason": "Manual Compaction"}
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422899927948, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 590931, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34661, "largest_seqno": 35212, "table_properties": {"data_size": 587990, "index_size": 913, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7809, "raw_average_key_size": 21, "raw_value_size": 581968, "raw_average_value_size": 1572, "num_data_blocks": 37, "num_entries": 370, "num_filter_entries": 370, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769422869, "oldest_key_time": 1769422869, "file_creation_time": 1769422899, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 9597 microseconds, and 6169 cpu microseconds.
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.928030) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 590931 bytes OK
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.928065) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.929743) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.929765) EVENT_LOG_v1 {"time_micros": 1769422899929758, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.929794) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 746245, prev total WAL file size 746245, number of live WAL files 2.
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.930586) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303030' seq:72057594037927935, type:22 .. '6D6772737461740031323531' seq:0, type:0; will stop at (end)
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(577KB)], [74(14MB)]
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422899930645, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15852260, "oldest_snapshot_seqno": -1}
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6408 keys, 11851449 bytes, temperature: kUnknown
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422899988052, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11851449, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11811940, "index_size": 22375, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 169376, "raw_average_key_size": 26, "raw_value_size": 11699670, "raw_average_value_size": 1825, "num_data_blocks": 874, "num_entries": 6408, "num_filter_entries": 6408, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769422899, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.988465) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11851449 bytes
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.989934) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 275.3 rd, 205.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.6 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(46.9) write-amplify(20.1) OK, records in: 6916, records dropped: 508 output_compression: NoCompression
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.989963) EVENT_LOG_v1 {"time_micros": 1769422899989948, "job": 42, "event": "compaction_finished", "compaction_time_micros": 57579, "compaction_time_cpu_micros": 27951, "output_level": 6, "num_output_files": 1, "total_output_size": 11851449, "num_input_records": 6916, "num_output_records": 6408, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422899990341, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769422899994105, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.930527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.994361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.994383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.994386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.994388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:21:39 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:21:39.994390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:21:40 compute-0 nova_compute[254880]: 2026-01-26 10:21:40.033 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:21:40 compute-0 nova_compute[254880]: 2026-01-26 10:21:40.033 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:21:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1482975074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:21:40 compute-0 nova_compute[254880]: 2026-01-26 10:21:40.256 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:21:40 compute-0 nova_compute[254880]: 2026-01-26 10:21:40.532 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:21:40 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2032313011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:21:40 compute-0 nova_compute[254880]: 2026-01-26 10:21:40.691 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:21:40 compute-0 nova_compute[254880]: 2026-01-26 10:21:40.697 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:21:40 compute-0 nova_compute[254880]: 2026-01-26 10:21:40.717 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:21:40 compute-0 nova_compute[254880]: 2026-01-26 10:21:40.719 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:21:40 compute-0 nova_compute[254880]: 2026-01-26 10:21:40.719 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:21:41 compute-0 ceph-mon[74456]: pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2032313011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:21:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:21:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:41.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:21:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:41.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:41 compute-0 nova_compute[254880]: 2026-01-26 10:21:41.720 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:21:41 compute-0 nova_compute[254880]: 2026-01-26 10:21:41.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:21:41 compute-0 nova_compute[254880]: 2026-01-26 10:21:41.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:21:41 compute-0 nova_compute[254880]: 2026-01-26 10:21:41.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:21:41 compute-0 nova_compute[254880]: 2026-01-26 10:21:41.995 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:21:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:21:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:21:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:21:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:21:42 compute-0 ceph-mon[74456]: pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:42 compute-0 nova_compute[254880]: 2026-01-26 10:21:42.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:21:43 compute-0 podman[287316]: 2026-01-26 10:21:43.175351489 +0000 UTC m=+0.111414213 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 10:21:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:21:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:43.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:21:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:43.582Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:21:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:43.583Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:21:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:43.583Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:21:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:43.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:21:43 compute-0 nova_compute[254880]: 2026-01-26 10:21:43.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:21:44 compute-0 nova_compute[254880]: 2026-01-26 10:21:44.464 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:44 compute-0 ceph-mon[74456]: pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1538737131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:21:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:21:44 compute-0 nova_compute[254880]: 2026-01-26 10:21:44.953 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:21:44 compute-0 nova_compute[254880]: 2026-01-26 10:21:44.954 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:21:44 compute-0 nova_compute[254880]: 2026-01-26 10:21:44.995 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:21:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:45.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:45 compute-0 nova_compute[254880]: 2026-01-26 10:21:45.590 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:45.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:45 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2110077342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:21:45 compute-0 nova_compute[254880]: 2026-01-26 10:21:45.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:21:45 compute-0 nova_compute[254880]: 2026-01-26 10:21:45.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 10:21:45 compute-0 nova_compute[254880]: 2026-01-26 10:21:45.989 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 10:21:45 compute-0 nova_compute[254880]: 2026-01-26 10:21:45.989 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:21:45 compute-0 nova_compute[254880]: 2026-01-26 10:21:45.989 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 10:21:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:21:46] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:21:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:21:46] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:21:46 compute-0 sudo[287345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:21:46 compute-0 sudo[287345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:21:46 compute-0 sudo[287345]: pam_unix(sudo:session): session closed for user root
Jan 26 10:21:46 compute-0 ceph-mon[74456]: pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:21:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:21:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:21:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:21:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:47.216Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:21:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:47.217Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:47.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:47.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:21:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:21:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:21:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:21:48 compute-0 ceph-mon[74456]: pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:48 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1170456353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:21:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:21:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:21:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:21:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:21:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:21:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:48.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:21:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:48.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:49 compute-0 nova_compute[254880]: 2026-01-26 10:21:49.005 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:21:49 compute-0 nova_compute[254880]: 2026-01-26 10:21:49.005 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:21:49 compute-0 nova_compute[254880]: 2026-01-26 10:21:49.005 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:21:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:49.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:49 compute-0 nova_compute[254880]: 2026-01-26 10:21:49.467 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:21:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:49.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:21:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/737420374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:21:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:21:50 compute-0 nova_compute[254880]: 2026-01-26 10:21:50.592 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:50 compute-0 ceph-mon[74456]: pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:51.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:51.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:21:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:21:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:21:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:21:53 compute-0 ceph-mon[74456]: pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:53.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:53.584Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:53.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:54 compute-0 nova_compute[254880]: 2026-01-26 10:21:54.509 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:21:54.706 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:21:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:21:54.707 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:21:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:21:54.707 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:21:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:21:55 compute-0 ceph-mon[74456]: pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:55.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:55 compute-0 nova_compute[254880]: 2026-01-26 10:21:55.595 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:55.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:21:56] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:21:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:21:56] "GET /metrics HTTP/1.1" 200 48533 "" "Prometheus/2.51.0"
Jan 26 10:21:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:21:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:21:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:21:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:21:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:21:57 compute-0 ceph-mon[74456]: pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:21:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:57.218Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:57.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:57.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:21:58.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:21:59 compute-0 ceph-mon[74456]: pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3956459805' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:21:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3956459805' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:21:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:21:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:21:59.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:21:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:21:59 compute-0 nova_compute[254880]: 2026-01-26 10:21:59.514 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:21:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:21:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:21:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:21:59.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:21:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:22:00 compute-0 ceph-mon[74456]: pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:00 compute-0 nova_compute[254880]: 2026-01-26 10:22:00.597 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:00 compute-0 nova_compute[254880]: 2026-01-26 10:22:00.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:22:01 compute-0 podman[287385]: 2026-01-26 10:22:01.110436652 +0000 UTC m=+0.046584478 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 10:22:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:22:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:01.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:22:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:01.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:22:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:22:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:22:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:22:02 compute-0 ceph-mon[74456]: pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:03.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:03.585Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:03.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:22:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:22:04 compute-0 nova_compute[254880]: 2026-01-26 10:22:04.517 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:04 compute-0 ceph-mon[74456]: pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:22:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:22:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:05.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:05 compute-0 nova_compute[254880]: 2026-01-26 10:22:05.599 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:05.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:06 compute-0 ceph-mon[74456]: pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:22:06] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Jan 26 10:22:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:22:06] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Jan 26 10:22:06 compute-0 sudo[287409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:22:06 compute-0 sudo[287409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:22:06 compute-0 sudo[287409]: pam_unix(sudo:session): session closed for user root
Jan 26 10:22:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:22:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:22:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:22:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:22:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:07.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:22:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:07.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:22:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:07 compute-0 sshd-session[287435]: Invalid user zabbix from 157.245.76.178 port 48966
Jan 26 10:22:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:07.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:07 compute-0 sshd-session[287435]: Connection closed by invalid user zabbix 157.245.76.178 port 48966 [preauth]
Jan 26 10:22:08 compute-0 ceph-mon[74456]: pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:08.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:09.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:09 compute-0 nova_compute[254880]: 2026-01-26 10:22:09.542 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:22:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:09.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:22:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:22:10 compute-0 nova_compute[254880]: 2026-01-26 10:22:10.614 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:10 compute-0 ceph-mon[74456]: pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:11.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:11.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:22:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:22:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:22:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:22:12 compute-0 ceph-mon[74456]: pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:13.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:13.585Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:13.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:14 compute-0 podman[287443]: 2026-01-26 10:22:14.163047503 +0000 UTC m=+0.092709815 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 10:22:14 compute-0 nova_compute[254880]: 2026-01-26 10:22:14.543 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:14 compute-0 ceph-mon[74456]: pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:22:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:15.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:15 compute-0 nova_compute[254880]: 2026-01-26 10:22:15.616 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:15.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:22:16] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Jan 26 10:22:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:22:16] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Jan 26 10:22:16 compute-0 ceph-mon[74456]: pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:22:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:22:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:22:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:22:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:17.220Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:17.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:17.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:22:18
Jan 26 10:22:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:22:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:22:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['volumes', 'vms', 'cephfs.cephfs.data', 'backups', '.nfs', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.control']
Jan 26 10:22:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:22:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:22:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:22:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:22:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:22:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:22:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:22:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:22:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:22:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:18.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:18 compute-0 ceph-mon[74456]: pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:22:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:19.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:22:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:19 compute-0 nova_compute[254880]: 2026-01-26 10:22:19.548 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:19.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:22:20 compute-0 nova_compute[254880]: 2026-01-26 10:22:20.618 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:21 compute-0 ceph-mon[74456]: pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:22:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:21.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:22:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s
Jan 26 10:22:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:21.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:22:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:22:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:22:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:22:23 compute-0 ceph-mon[74456]: pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s
Jan 26 10:22:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:22:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:23.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:22:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 0 B/s wr, 6 op/s
Jan 26 10:22:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:23.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:23.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:24 compute-0 ceph-mon[74456]: pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 0 B/s wr, 6 op/s
Jan 26 10:22:24 compute-0 nova_compute[254880]: 2026-01-26 10:22:24.550 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:22:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:25.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:25 compute-0 sudo[287481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:22:25 compute-0 sudo[287481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:22:25 compute-0 sudo[287481]: pam_unix(sudo:session): session closed for user root
Jan 26 10:22:25 compute-0 sudo[287506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:22:25 compute-0 sudo[287506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:22:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 0 B/s wr, 79 op/s
Jan 26 10:22:25 compute-0 nova_compute[254880]: 2026-01-26 10:22:25.619 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:25.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:25 compute-0 sudo[287506]: pam_unix(sudo:session): session closed for user root
Jan 26 10:22:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:22:26 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:22:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:22:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:22:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 90 op/s
Jan 26 10:22:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:22:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:22:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:22:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:22:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:22:26 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:22:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:22:26 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:22:26 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:22:26 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:22:26 compute-0 sudo[287560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:22:26 compute-0 sudo[287560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:22:26 compute-0 sudo[287560]: pam_unix(sudo:session): session closed for user root
Jan 26 10:22:26 compute-0 sudo[287585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:22:26 compute-0 sudo[287585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:22:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:22:26] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 26 10:22:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:22:26] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 26 10:22:26 compute-0 podman[287655]: 2026-01-26 10:22:26.629721947 +0000 UTC m=+0.026626060 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:22:26 compute-0 podman[287655]: 2026-01-26 10:22:26.758773791 +0000 UTC m=+0.155677884 container create 5eb0a12032e8a1e28f4c287482927660ff56d93cb309b90f111444ecb85369eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_galileo, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 10:22:26 compute-0 ceph-mon[74456]: pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 0 B/s wr, 79 op/s
Jan 26 10:22:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:22:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:22:26 compute-0 ceph-mon[74456]: pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 90 op/s
Jan 26 10:22:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:22:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:22:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:22:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:22:26 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:22:26 compute-0 systemd[1]: Started libpod-conmon-5eb0a12032e8a1e28f4c287482927660ff56d93cb309b90f111444ecb85369eb.scope.
Jan 26 10:22:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:22:26 compute-0 sudo[287670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:22:26 compute-0 sudo[287670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:22:26 compute-0 sudo[287670]: pam_unix(sudo:session): session closed for user root
Jan 26 10:22:26 compute-0 podman[287655]: 2026-01-26 10:22:26.881303439 +0000 UTC m=+0.278207562 container init 5eb0a12032e8a1e28f4c287482927660ff56d93cb309b90f111444ecb85369eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_galileo, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:22:26 compute-0 podman[287655]: 2026-01-26 10:22:26.888564675 +0000 UTC m=+0.285468778 container start 5eb0a12032e8a1e28f4c287482927660ff56d93cb309b90f111444ecb85369eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_galileo, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 10:22:26 compute-0 jolly_galileo[287695]: 167 167
Jan 26 10:22:26 compute-0 systemd[1]: libpod-5eb0a12032e8a1e28f4c287482927660ff56d93cb309b90f111444ecb85369eb.scope: Deactivated successfully.
Jan 26 10:22:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:22:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:22:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:22:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:22:27 compute-0 podman[287655]: 2026-01-26 10:22:27.020544169 +0000 UTC m=+0.417448302 container attach 5eb0a12032e8a1e28f4c287482927660ff56d93cb309b90f111444ecb85369eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:22:27 compute-0 podman[287655]: 2026-01-26 10:22:27.021102543 +0000 UTC m=+0.418006666 container died 5eb0a12032e8a1e28f4c287482927660ff56d93cb309b90f111444ecb85369eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_galileo, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 26 10:22:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c072e81067fdabf03753daeb89b430c98494d186aa0ad67a9ef003171cc8fc5a-merged.mount: Deactivated successfully.
Jan 26 10:22:27 compute-0 podman[287655]: 2026-01-26 10:22:27.103161419 +0000 UTC m=+0.500065522 container remove 5eb0a12032e8a1e28f4c287482927660ff56d93cb309b90f111444ecb85369eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_galileo, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:22:27 compute-0 systemd[1]: libpod-conmon-5eb0a12032e8a1e28f4c287482927660ff56d93cb309b90f111444ecb85369eb.scope: Deactivated successfully.
Jan 26 10:22:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:27.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:27 compute-0 podman[287723]: 2026-01-26 10:22:27.289786307 +0000 UTC m=+0.060284689 container create 18c843b4cb15d7e6a8fedfe8992995cc0eb14c6360976ea36582bf25a0b7d3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_williams, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:22:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:27.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:27 compute-0 systemd[1]: Started libpod-conmon-18c843b4cb15d7e6a8fedfe8992995cc0eb14c6360976ea36582bf25a0b7d3d3.scope.
Jan 26 10:22:27 compute-0 podman[287723]: 2026-01-26 10:22:27.254280069 +0000 UTC m=+0.024778471 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:22:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947bd9b629d8d29f556bf2442b58b0ff929aa9e07932ad0f103bd73cf0bdc6e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947bd9b629d8d29f556bf2442b58b0ff929aa9e07932ad0f103bd73cf0bdc6e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947bd9b629d8d29f556bf2442b58b0ff929aa9e07932ad0f103bd73cf0bdc6e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947bd9b629d8d29f556bf2442b58b0ff929aa9e07932ad0f103bd73cf0bdc6e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947bd9b629d8d29f556bf2442b58b0ff929aa9e07932ad0f103bd73cf0bdc6e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:22:27 compute-0 podman[287723]: 2026-01-26 10:22:27.384623478 +0000 UTC m=+0.155121880 container init 18c843b4cb15d7e6a8fedfe8992995cc0eb14c6360976ea36582bf25a0b7d3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_williams, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 26 10:22:27 compute-0 podman[287723]: 2026-01-26 10:22:27.395356727 +0000 UTC m=+0.165855109 container start 18c843b4cb15d7e6a8fedfe8992995cc0eb14c6360976ea36582bf25a0b7d3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:22:27 compute-0 podman[287723]: 2026-01-26 10:22:27.409078387 +0000 UTC m=+0.179576789 container attach 18c843b4cb15d7e6a8fedfe8992995cc0eb14c6360976ea36582bf25a0b7d3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_williams, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:22:27 compute-0 crazy_williams[287740]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:22:27 compute-0 crazy_williams[287740]: --> All data devices are unavailable
Jan 26 10:22:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:27.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:27 compute-0 systemd[1]: libpod-18c843b4cb15d7e6a8fedfe8992995cc0eb14c6360976ea36582bf25a0b7d3d3.scope: Deactivated successfully.
Jan 26 10:22:27 compute-0 podman[287723]: 2026-01-26 10:22:27.745489089 +0000 UTC m=+0.515987471 container died 18c843b4cb15d7e6a8fedfe8992995cc0eb14c6360976ea36582bf25a0b7d3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_williams, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 10:22:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-947bd9b629d8d29f556bf2442b58b0ff929aa9e07932ad0f103bd73cf0bdc6e6-merged.mount: Deactivated successfully.
Jan 26 10:22:27 compute-0 podman[287723]: 2026-01-26 10:22:27.905807108 +0000 UTC m=+0.676305490 container remove 18c843b4cb15d7e6a8fedfe8992995cc0eb14c6360976ea36582bf25a0b7d3d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_williams, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Jan 26 10:22:27 compute-0 sudo[287585]: pam_unix(sudo:session): session closed for user root
Jan 26 10:22:27 compute-0 systemd[1]: libpod-conmon-18c843b4cb15d7e6a8fedfe8992995cc0eb14c6360976ea36582bf25a0b7d3d3.scope: Deactivated successfully.
Jan 26 10:22:28 compute-0 sudo[287766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:22:28 compute-0 sudo[287766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:22:28 compute-0 sudo[287766]: pam_unix(sudo:session): session closed for user root
Jan 26 10:22:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 90 op/s
Jan 26 10:22:28 compute-0 sudo[287793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:22:28 compute-0 sudo[287793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:22:28 compute-0 podman[287860]: 2026-01-26 10:22:28.495073066 +0000 UTC m=+0.068283304 container create c9c7fe893d82ffec98dd3b7b9a3bbcab96da271854d9f3016fa3d9223d0bf999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 26 10:22:28 compute-0 systemd[1]: Started libpod-conmon-c9c7fe893d82ffec98dd3b7b9a3bbcab96da271854d9f3016fa3d9223d0bf999.scope.
Jan 26 10:22:28 compute-0 podman[287860]: 2026-01-26 10:22:28.448808067 +0000 UTC m=+0.022018335 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:22:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:22:28 compute-0 podman[287860]: 2026-01-26 10:22:28.60967202 +0000 UTC m=+0.182882278 container init c9c7fe893d82ffec98dd3b7b9a3bbcab96da271854d9f3016fa3d9223d0bf999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:22:28 compute-0 podman[287860]: 2026-01-26 10:22:28.616315579 +0000 UTC m=+0.189525817 container start c9c7fe893d82ffec98dd3b7b9a3bbcab96da271854d9f3016fa3d9223d0bf999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 26 10:22:28 compute-0 boring_torvalds[287877]: 167 167
Jan 26 10:22:28 compute-0 systemd[1]: libpod-c9c7fe893d82ffec98dd3b7b9a3bbcab96da271854d9f3016fa3d9223d0bf999.scope: Deactivated successfully.
Jan 26 10:22:28 compute-0 podman[287860]: 2026-01-26 10:22:28.642374323 +0000 UTC m=+0.215584561 container attach c9c7fe893d82ffec98dd3b7b9a3bbcab96da271854d9f3016fa3d9223d0bf999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:22:28 compute-0 podman[287860]: 2026-01-26 10:22:28.643278518 +0000 UTC m=+0.216488756 container died c9c7fe893d82ffec98dd3b7b9a3bbcab96da271854d9f3016fa3d9223d0bf999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 26 10:22:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c37f755719ead7341b78479d18b2161bdc768cbc459f8578f56d8629f450cae-merged.mount: Deactivated successfully.
Jan 26 10:22:28 compute-0 podman[287860]: 2026-01-26 10:22:28.765510557 +0000 UTC m=+0.338720795 container remove c9c7fe893d82ffec98dd3b7b9a3bbcab96da271854d9f3016fa3d9223d0bf999 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Jan 26 10:22:28 compute-0 systemd[1]: libpod-conmon-c9c7fe893d82ffec98dd3b7b9a3bbcab96da271854d9f3016fa3d9223d0bf999.scope: Deactivated successfully.
Jan 26 10:22:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:28.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:22:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:28.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:22:28 compute-0 podman[287903]: 2026-01-26 10:22:28.973688068 +0000 UTC m=+0.068109190 container create d302d781003103105f69efdabcd3a178331425d2ad42c2f17b6f8f8a886bef57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_feynman, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:22:29 compute-0 podman[287903]: 2026-01-26 10:22:28.937972643 +0000 UTC m=+0.032393785 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:22:29 compute-0 systemd[1]: Started libpod-conmon-d302d781003103105f69efdabcd3a178331425d2ad42c2f17b6f8f8a886bef57.scope.
Jan 26 10:22:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/199e4fd4e99ee1a778cc75a69903afc13f763e6bade0bf10dfaaeaa641b9f51e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/199e4fd4e99ee1a778cc75a69903afc13f763e6bade0bf10dfaaeaa641b9f51e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/199e4fd4e99ee1a778cc75a69903afc13f763e6bade0bf10dfaaeaa641b9f51e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:22:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/199e4fd4e99ee1a778cc75a69903afc13f763e6bade0bf10dfaaeaa641b9f51e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:22:29 compute-0 podman[287903]: 2026-01-26 10:22:29.13787032 +0000 UTC m=+0.232291462 container init d302d781003103105f69efdabcd3a178331425d2ad42c2f17b6f8f8a886bef57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:22:29 compute-0 ceph-mon[74456]: pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 90 op/s
Jan 26 10:22:29 compute-0 podman[287903]: 2026-01-26 10:22:29.14787603 +0000 UTC m=+0.242297152 container start d302d781003103105f69efdabcd3a178331425d2ad42c2f17b6f8f8a886bef57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:22:29 compute-0 podman[287903]: 2026-01-26 10:22:29.194090497 +0000 UTC m=+0.288511649 container attach d302d781003103105f69efdabcd3a178331425d2ad42c2f17b6f8f8a886bef57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:22:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:22:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:29.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]: {
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:     "0": [
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:         {
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:             "devices": [
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "/dev/loop3"
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:             ],
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:             "lv_name": "ceph_lv0",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:             "lv_size": "21470642176",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:             "name": "ceph_lv0",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:             "tags": {
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "ceph.cluster_name": "ceph",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "ceph.crush_device_class": "",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "ceph.encrypted": "0",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "ceph.osd_id": "0",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "ceph.type": "block",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "ceph.vdo": "0",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:                 "ceph.with_tpm": "0"
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:             },
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:             "type": "block",
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:             "vg_name": "ceph_vg0"
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:         }
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]:     ]
Jan 26 10:22:29 compute-0 quizzical_feynman[287919]: }
Jan 26 10:22:29 compute-0 systemd[1]: libpod-d302d781003103105f69efdabcd3a178331425d2ad42c2f17b6f8f8a886bef57.scope: Deactivated successfully.
Jan 26 10:22:29 compute-0 podman[287903]: 2026-01-26 10:22:29.416999015 +0000 UTC m=+0.511420137 container died d302d781003103105f69efdabcd3a178331425d2ad42c2f17b6f8f8a886bef57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 10:22:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-199e4fd4e99ee1a778cc75a69903afc13f763e6bade0bf10dfaaeaa641b9f51e-merged.mount: Deactivated successfully.
Jan 26 10:22:29 compute-0 podman[287903]: 2026-01-26 10:22:29.51240664 +0000 UTC m=+0.606827762 container remove d302d781003103105f69efdabcd3a178331425d2ad42c2f17b6f8f8a886bef57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 10:22:29 compute-0 systemd[1]: libpod-conmon-d302d781003103105f69efdabcd3a178331425d2ad42c2f17b6f8f8a886bef57.scope: Deactivated successfully.
Jan 26 10:22:29 compute-0 nova_compute[254880]: 2026-01-26 10:22:29.601 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:29 compute-0 sudo[287793]: pam_unix(sudo:session): session closed for user root
Jan 26 10:22:29 compute-0 sudo[287940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:22:29 compute-0 sudo[287940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:22:29 compute-0 sudo[287940]: pam_unix(sudo:session): session closed for user root
Jan 26 10:22:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:22:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:29.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:22:29 compute-0 sudo[287965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:22:29 compute-0 sudo[287965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:22:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:22:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Jan 26 10:22:30 compute-0 podman[288030]: 2026-01-26 10:22:30.118861253 +0000 UTC m=+0.052863768 container create 3ac3c435dd40fe81939483f2943fa3b0d4e274f155b66a84b4c48dc40f9b98ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:22:30 compute-0 systemd[1]: Started libpod-conmon-3ac3c435dd40fe81939483f2943fa3b0d4e274f155b66a84b4c48dc40f9b98ef.scope.
Jan 26 10:22:30 compute-0 podman[288030]: 2026-01-26 10:22:30.089057268 +0000 UTC m=+0.023059813 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:22:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:22:30 compute-0 podman[288030]: 2026-01-26 10:22:30.227962188 +0000 UTC m=+0.161964703 container init 3ac3c435dd40fe81939483f2943fa3b0d4e274f155b66a84b4c48dc40f9b98ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 10:22:30 compute-0 ceph-mon[74456]: pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 0 B/s wr, 91 op/s
Jan 26 10:22:30 compute-0 podman[288030]: 2026-01-26 10:22:30.236793447 +0000 UTC m=+0.170795962 container start 3ac3c435dd40fe81939483f2943fa3b0d4e274f155b66a84b4c48dc40f9b98ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:22:30 compute-0 peaceful_swirles[288046]: 167 167
Jan 26 10:22:30 compute-0 systemd[1]: libpod-3ac3c435dd40fe81939483f2943fa3b0d4e274f155b66a84b4c48dc40f9b98ef.scope: Deactivated successfully.
Jan 26 10:22:30 compute-0 podman[288030]: 2026-01-26 10:22:30.256008435 +0000 UTC m=+0.190010950 container attach 3ac3c435dd40fe81939483f2943fa3b0d4e274f155b66a84b4c48dc40f9b98ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 10:22:30 compute-0 podman[288030]: 2026-01-26 10:22:30.256866989 +0000 UTC m=+0.190869514 container died 3ac3c435dd40fe81939483f2943fa3b0d4e274f155b66a84b4c48dc40f9b98ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 26 10:22:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a518eb2fd2aa1ca0a923d586e51b98b5aad51104df429d3045a11b6f891358e-merged.mount: Deactivated successfully.
Jan 26 10:22:30 compute-0 podman[288030]: 2026-01-26 10:22:30.363366674 +0000 UTC m=+0.297369189 container remove 3ac3c435dd40fe81939483f2943fa3b0d4e274f155b66a84b4c48dc40f9b98ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:22:30 compute-0 systemd[1]: libpod-conmon-3ac3c435dd40fe81939483f2943fa3b0d4e274f155b66a84b4c48dc40f9b98ef.scope: Deactivated successfully.
Jan 26 10:22:30 compute-0 podman[288074]: 2026-01-26 10:22:30.539796757 +0000 UTC m=+0.052916910 container create 7d2c7c1f8395547317de41cb3b5ade57273191b12ac28887293c8dbf07db7051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 10:22:30 compute-0 podman[288074]: 2026-01-26 10:22:30.511153394 +0000 UTC m=+0.024273567 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:22:30 compute-0 nova_compute[254880]: 2026-01-26 10:22:30.662 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:30 compute-0 systemd[1]: Started libpod-conmon-7d2c7c1f8395547317de41cb3b5ade57273191b12ac28887293c8dbf07db7051.scope.
Jan 26 10:22:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc452ecfb62d8b9297ea444e4d82fa276aebb1512dbf4adbe05b274a6dcf3c55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc452ecfb62d8b9297ea444e4d82fa276aebb1512dbf4adbe05b274a6dcf3c55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc452ecfb62d8b9297ea444e4d82fa276aebb1512dbf4adbe05b274a6dcf3c55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc452ecfb62d8b9297ea444e4d82fa276aebb1512dbf4adbe05b274a6dcf3c55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:22:30 compute-0 podman[288074]: 2026-01-26 10:22:30.866083986 +0000 UTC m=+0.379204209 container init 7d2c7c1f8395547317de41cb3b5ade57273191b12ac28887293c8dbf07db7051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hellman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:22:30 compute-0 podman[288074]: 2026-01-26 10:22:30.87735969 +0000 UTC m=+0.390479863 container start 7d2c7c1f8395547317de41cb3b5ade57273191b12ac28887293c8dbf07db7051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hellman, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 26 10:22:30 compute-0 podman[288074]: 2026-01-26 10:22:30.883769994 +0000 UTC m=+0.396890237 container attach 7d2c7c1f8395547317de41cb3b5ade57273191b12ac28887293c8dbf07db7051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:22:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:31.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:31 compute-0 lvm[288172]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:22:31 compute-0 lvm[288172]: VG ceph_vg0 finished
Jan 26 10:22:31 compute-0 podman[288165]: 2026-01-26 10:22:31.624504841 +0000 UTC m=+0.065155100 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:22:31 compute-0 nervous_hellman[288090]: {}
Jan 26 10:22:31 compute-0 systemd[1]: libpod-7d2c7c1f8395547317de41cb3b5ade57273191b12ac28887293c8dbf07db7051.scope: Deactivated successfully.
Jan 26 10:22:31 compute-0 systemd[1]: libpod-7d2c7c1f8395547317de41cb3b5ade57273191b12ac28887293c8dbf07db7051.scope: Consumed 1.251s CPU time.
Jan 26 10:22:31 compute-0 podman[288074]: 2026-01-26 10:22:31.653958256 +0000 UTC m=+1.167078419 container died 7d2c7c1f8395547317de41cb3b5ade57273191b12ac28887293c8dbf07db7051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:22:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:31.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc452ecfb62d8b9297ea444e4d82fa276aebb1512dbf4adbe05b274a6dcf3c55-merged.mount: Deactivated successfully.
Jan 26 10:22:31 compute-0 podman[288074]: 2026-01-26 10:22:31.836072173 +0000 UTC m=+1.349192326 container remove 7d2c7c1f8395547317de41cb3b5ade57273191b12ac28887293c8dbf07db7051 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hellman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 10:22:31 compute-0 systemd[1]: libpod-conmon-7d2c7c1f8395547317de41cb3b5ade57273191b12ac28887293c8dbf07db7051.scope: Deactivated successfully.
Jan 26 10:22:31 compute-0 sudo[287965]: pam_unix(sudo:session): session closed for user root
Jan 26 10:22:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:22:31 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:22:31 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:22:31 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:22:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:22:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:22:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:22:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:22:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 0 B/s wr, 84 op/s
Jan 26 10:22:32 compute-0 sudo[288198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:22:32 compute-0 sudo[288198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:22:32 compute-0 sudo[288198]: pam_unix(sudo:session): session closed for user root
Jan 26 10:22:32 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:22:32 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:22:32 compute-0 ceph-mon[74456]: pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 0 B/s wr, 84 op/s
Jan 26 10:22:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:33.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:33.586Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:22:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:33.586Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:22:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:22:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:33.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:22:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:22:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:22:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 0 B/s wr, 84 op/s
Jan 26 10:22:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:22:34 compute-0 nova_compute[254880]: 2026-01-26 10:22:34.604 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:22:35 compute-0 ceph-mon[74456]: pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 0 B/s wr, 84 op/s
Jan 26 10:22:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:35.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:35 compute-0 nova_compute[254880]: 2026-01-26 10:22:35.702 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:35.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:22:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:22:36] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:22:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:22:36] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:22:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:22:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:22:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:22:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:22:37 compute-0 ceph-mon[74456]: pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:22:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:37.223Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:37.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:22:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:37.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:22:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:38 compute-0 ceph-mon[74456]: pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:38.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:38 compute-0 nova_compute[254880]: 2026-01-26 10:22:38.974 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.042 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.043 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.043 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.043 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.044 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:22:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:22:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:39.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:22:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:22:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/384900377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.521 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:22:39 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/384900377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.607 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.661 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.663 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4408MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.663 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.663 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:22:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:39.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.759 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.759 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.872 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing inventories for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.899 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating ProviderTree inventory for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.900 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating inventory in ProviderTree for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.916 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing aggregate associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 10:22:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.941 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing trait associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 10:22:39 compute-0 nova_compute[254880]: 2026-01-26 10:22:39.963 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:22:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:22:40 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580551924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:22:40 compute-0 nova_compute[254880]: 2026-01-26 10:22:40.440 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:22:40 compute-0 nova_compute[254880]: 2026-01-26 10:22:40.444 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:22:40 compute-0 nova_compute[254880]: 2026-01-26 10:22:40.459 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:22:40 compute-0 nova_compute[254880]: 2026-01-26 10:22:40.460 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:22:40 compute-0 nova_compute[254880]: 2026-01-26 10:22:40.460 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:22:40 compute-0 ceph-mon[74456]: pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1580551924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:22:40 compute-0 nova_compute[254880]: 2026-01-26 10:22:40.733 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:22:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:41.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:22:41 compute-0 nova_compute[254880]: 2026-01-26 10:22:41.444 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:22:41 compute-0 nova_compute[254880]: 2026-01-26 10:22:41.445 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:22:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:41.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:22:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:22:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:22:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:22:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:43 compute-0 nova_compute[254880]: 2026-01-26 10:22:43.028 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:22:43 compute-0 nova_compute[254880]: 2026-01-26 10:22:43.028 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:22:43 compute-0 nova_compute[254880]: 2026-01-26 10:22:43.028 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:22:43 compute-0 nova_compute[254880]: 2026-01-26 10:22:43.044 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:22:43 compute-0 ceph-mon[74456]: pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:43.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:43.587Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:43.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:43 compute-0 nova_compute[254880]: 2026-01-26 10:22:43.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:22:43 compute-0 nova_compute[254880]: 2026-01-26 10:22:43.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:22:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:44 compute-0 ceph-mon[74456]: pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:44 compute-0 nova_compute[254880]: 2026-01-26 10:22:44.610 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:22:44 compute-0 nova_compute[254880]: 2026-01-26 10:22:44.954 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:22:45 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3419026471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:22:45 compute-0 podman[288282]: 2026-01-26 10:22:45.234023554 +0000 UTC m=+0.154254745 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 10:22:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:45.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:45 compute-0 nova_compute[254880]: 2026-01-26 10:22:45.734 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:45.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:45 compute-0 nova_compute[254880]: 2026-01-26 10:22:45.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:22:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1792820060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:22:46 compute-0 ceph-mon[74456]: pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:22:46] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:22:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:22:46] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:22:46 compute-0 sudo[288310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:22:46 compute-0 sudo[288310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:22:46 compute-0 sudo[288310]: pam_unix(sudo:session): session closed for user root
Jan 26 10:22:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:22:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:22:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:22:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:22:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:47.223Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:22:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:47.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:22:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:47.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:48 compute-0 sshd-session[288335]: Invalid user hadoop from 157.245.76.178 port 49882
Jan 26 10:22:48 compute-0 sshd-session[288335]: Connection closed by invalid user hadoop 157.245.76.178 port 49882 [preauth]
Jan 26 10:22:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:22:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:22:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:22:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:22:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:22:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:22:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:22:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:22:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:48.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:48 compute-0 nova_compute[254880]: 2026-01-26 10:22:48.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:22:48 compute-0 nova_compute[254880]: 2026-01-26 10:22:48.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:22:49 compute-0 ceph-mon[74456]: pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:22:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:49.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:49 compute-0 nova_compute[254880]: 2026-01-26 10:22:49.613 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:49.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:22:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:50 compute-0 nova_compute[254880]: 2026-01-26 10:22:50.794 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:50 compute-0 nova_compute[254880]: 2026-01-26 10:22:50.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:22:51 compute-0 ceph-mon[74456]: pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1151407283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:22:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:51.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:51.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:22:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:22:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:22:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:22:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3104688942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:22:53 compute-0 ceph-mon[74456]: pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:53.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:53.588Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:53.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:54 compute-0 nova_compute[254880]: 2026-01-26 10:22:54.617 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:22:54.707 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:22:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:22:54.707 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:22:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:22:54.707 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:22:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:22:55 compute-0 ceph-mon[74456]: pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:55.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:22:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:55.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:22:55 compute-0 nova_compute[254880]: 2026-01-26 10:22:55.849 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:56 compute-0 ceph-mon[74456]: pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:22:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:22:56] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:22:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:22:56] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:22:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:22:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:22:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:22:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:22:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:22:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:57.225Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:22:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:57.267Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:22:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:57.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:57.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:22:58.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:22:59 compute-0 ceph-mon[74456]: pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:22:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/2992943752' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:22:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/2992943752' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:22:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:22:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:22:59.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:22:59 compute-0 nova_compute[254880]: 2026-01-26 10:22:59.648 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:22:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:22:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:22:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:22:59.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:22:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:23:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 26 10:23:00 compute-0 nova_compute[254880]: 2026-01-26 10:23:00.877 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:01 compute-0 ceph-mon[74456]: pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 26 10:23:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:01.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:01.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:23:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:23:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:23:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:23:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Jan 26 10:23:02 compute-0 podman[288351]: 2026-01-26 10:23:02.11175993 +0000 UTC m=+0.046124851 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 26 10:23:03 compute-0 ceph-mon[74456]: pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Jan 26 10:23:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:03.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:03.590Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:23:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:23:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:03.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Jan 26 10:23:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:23:04 compute-0 nova_compute[254880]: 2026-01-26 10:23:04.651 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:23:05 compute-0 ceph-mon[74456]: pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Jan 26 10:23:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:05.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:05.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:05 compute-0 nova_compute[254880]: 2026-01-26 10:23:05.879 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 26 10:23:06 compute-0 ceph-mon[74456]: pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 26 10:23:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:23:06] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Jan 26 10:23:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:23:06] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Jan 26 10:23:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:23:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:23:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:23:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:23:07 compute-0 sudo[288376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:23:07 compute-0 sudo[288376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:23:07 compute-0 sudo[288376]: pam_unix(sudo:session): session closed for user root
Jan 26 10:23:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:07.269Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:07.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:07.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Jan 26 10:23:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:08.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:09 compute-0 ceph-mon[74456]: pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 60 op/s
Jan 26 10:23:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:09.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:09 compute-0 nova_compute[254880]: 2026-01-26 10:23:09.655 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:09.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:09 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:23:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 26 10:23:10 compute-0 nova_compute[254880]: 2026-01-26 10:23:10.881 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:11 compute-0 ceph-mon[74456]: pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 26 10:23:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:11.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:11.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:23:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:23:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:23:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:23:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:12 compute-0 ceph-mon[74456]: pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:13.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:13.591Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:13.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:14 compute-0 nova_compute[254880]: 2026-01-26 10:23:14.658 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:14 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:23:15 compute-0 ceph-mon[74456]: pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:15.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:15.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:15 compute-0 nova_compute[254880]: 2026-01-26 10:23:15.882 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:16 compute-0 podman[288409]: 2026-01-26 10:23:16.185390488 +0000 UTC m=+0.109835999 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 10:23:16 compute-0 ceph-mon[74456]: pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:23:16] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Jan 26 10:23:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:23:16] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Jan 26 10:23:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:23:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:23:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:23:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:23:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:17.269Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:23:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:17.270Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:23:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:17.270Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:17.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:17.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:23:18
Jan 26 10:23:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:23:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:23:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.nfs', 'default.rgw.log', '.mgr', 'default.rgw.control', '.rgw.root', 'backups', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'images']
Jan 26 10:23:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:23:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:23:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:23:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:23:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:23:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:23:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:23:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:23:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:23:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:18.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:19 compute-0 ceph-mon[74456]: pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:23:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:23:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:19.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:19 compute-0 nova_compute[254880]: 2026-01-26 10:23:19.659 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:19.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:19 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:23:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:20 compute-0 nova_compute[254880]: 2026-01-26 10:23:20.885 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:21 compute-0 ceph-mon[74456]: pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:21.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:21.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:23:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:23:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:23:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:23:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:23 compute-0 ceph-mon[74456]: pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:23.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:23.592Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:23:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:23.592Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:23.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:24 compute-0 nova_compute[254880]: 2026-01-26 10:23:24.663 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:24 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:23:25 compute-0 ceph-mon[74456]: pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:25.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:25.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:25 compute-0 nova_compute[254880]: 2026-01-26 10:23:25.887 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:26 compute-0 ceph-mon[74456]: pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:23:26] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:23:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:23:26] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:23:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:23:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:23:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:23:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:23:27 compute-0 sudo[288447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:23:27 compute-0 sudo[288447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:23:27 compute-0 sudo[288447]: pam_unix(sudo:session): session closed for user root
Jan 26 10:23:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:27.271Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:23:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:27.271Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:23:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:27.272Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:27.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:28 compute-0 sshd-session[288472]: Invalid user hadoop from 157.245.76.178 port 58970
Jan 26 10:23:28 compute-0 sshd-session[288472]: Connection closed by invalid user hadoop 157.245.76.178 port 58970 [preauth]
Jan 26 10:23:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:28.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:29 compute-0 ceph-mon[74456]: pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:29.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:29 compute-0 nova_compute[254880]: 2026-01-26 10:23:29.705 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:23:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:29.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:23:29 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:23:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:30 compute-0 nova_compute[254880]: 2026-01-26 10:23:30.888 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:31 compute-0 ceph-mon[74456]: pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:31.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:31.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:23:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:23:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:23:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:23:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:32 compute-0 sudo[288478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:23:32 compute-0 sudo[288478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:23:32 compute-0 sudo[288478]: pam_unix(sudo:session): session closed for user root
Jan 26 10:23:32 compute-0 sudo[288508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:23:32 compute-0 sudo[288508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:23:32 compute-0 podman[288502]: 2026-01-26 10:23:32.405476675 +0000 UTC m=+0.071481744 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 10:23:32 compute-0 ceph-mon[74456]: pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:33 compute-0 sudo[288508]: pam_unix(sudo:session): session closed for user root
Jan 26 10:23:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:23:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:23:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:23:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:23:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:23:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:23:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:23:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:23:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:23:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:23:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:23:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:23:33 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:23:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:23:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:23:33 compute-0 sudo[288580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:23:33 compute-0 sudo[288580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:23:33 compute-0 sudo[288580]: pam_unix(sudo:session): session closed for user root
Jan 26 10:23:33 compute-0 sudo[288605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:23:33 compute-0 sudo[288605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:23:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:33.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:33.593Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:23:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:33.595Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:23:33 compute-0 podman[288670]: 2026-01-26 10:23:33.68396777 +0000 UTC m=+0.045666407 container create 7b79fe47ff466c6f40cba1f1f2042a76166faeada28008a92365227dffcfbee2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 26 10:23:33 compute-0 systemd[1]: Started libpod-conmon-7b79fe47ff466c6f40cba1f1f2042a76166faeada28008a92365227dffcfbee2.scope.
Jan 26 10:23:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:23:33 compute-0 podman[288670]: 2026-01-26 10:23:33.664218142 +0000 UTC m=+0.025916849 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:23:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:23:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:23:33 compute-0 podman[288670]: 2026-01-26 10:23:33.780676804 +0000 UTC m=+0.142375471 container init 7b79fe47ff466c6f40cba1f1f2042a76166faeada28008a92365227dffcfbee2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_galois, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 26 10:23:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:23:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:23:33 compute-0 ceph-mon[74456]: pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:23:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:23:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:23:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:23:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:23:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:23:33 compute-0 podman[288670]: 2026-01-26 10:23:33.789013455 +0000 UTC m=+0.150712102 container start 7b79fe47ff466c6f40cba1f1f2042a76166faeada28008a92365227dffcfbee2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 10:23:33 compute-0 podman[288670]: 2026-01-26 10:23:33.792696118 +0000 UTC m=+0.154394785 container attach 7b79fe47ff466c6f40cba1f1f2042a76166faeada28008a92365227dffcfbee2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_galois, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:23:33 compute-0 laughing_galois[288686]: 167 167
Jan 26 10:23:33 compute-0 systemd[1]: libpod-7b79fe47ff466c6f40cba1f1f2042a76166faeada28008a92365227dffcfbee2.scope: Deactivated successfully.
Jan 26 10:23:33 compute-0 podman[288670]: 2026-01-26 10:23:33.795428214 +0000 UTC m=+0.157126881 container died 7b79fe47ff466c6f40cba1f1f2042a76166faeada28008a92365227dffcfbee2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 10:23:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:33.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0a72c09b27e0f6b8bd5cf882a24abf72edddec1c51e45bb450fe6b1edc66b7a-merged.mount: Deactivated successfully.
Jan 26 10:23:33 compute-0 podman[288670]: 2026-01-26 10:23:33.842418688 +0000 UTC m=+0.204117335 container remove 7b79fe47ff466c6f40cba1f1f2042a76166faeada28008a92365227dffcfbee2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_galois, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Jan 26 10:23:33 compute-0 systemd[1]: libpod-conmon-7b79fe47ff466c6f40cba1f1f2042a76166faeada28008a92365227dffcfbee2.scope: Deactivated successfully.
Jan 26 10:23:34 compute-0 podman[288712]: 2026-01-26 10:23:34.045134792 +0000 UTC m=+0.053467925 container create cfcb809ee14d3e4687c1fe2b84f5cdf36130cd82fcc4df31913d7b3a66d98ae7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:23:34 compute-0 podman[288712]: 2026-01-26 10:23:34.017947838 +0000 UTC m=+0.026281021 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:23:34 compute-0 systemd[1]: Started libpod-conmon-cfcb809ee14d3e4687c1fe2b84f5cdf36130cd82fcc4df31913d7b3a66d98ae7.scope.
Jan 26 10:23:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf7ce4882868fe26863f9542de2e49920365e2ebcfb12c2340c18b9a2998153/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf7ce4882868fe26863f9542de2e49920365e2ebcfb12c2340c18b9a2998153/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf7ce4882868fe26863f9542de2e49920365e2ebcfb12c2340c18b9a2998153/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf7ce4882868fe26863f9542de2e49920365e2ebcfb12c2340c18b9a2998153/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf7ce4882868fe26863f9542de2e49920365e2ebcfb12c2340c18b9a2998153/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:23:34 compute-0 podman[288712]: 2026-01-26 10:23:34.234755864 +0000 UTC m=+0.243088997 container init cfcb809ee14d3e4687c1fe2b84f5cdf36130cd82fcc4df31913d7b3a66d98ae7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_merkle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:23:34 compute-0 podman[288712]: 2026-01-26 10:23:34.244537706 +0000 UTC m=+0.252870869 container start cfcb809ee14d3e4687c1fe2b84f5cdf36130cd82fcc4df31913d7b3a66d98ae7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_merkle, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 10:23:34 compute-0 podman[288712]: 2026-01-26 10:23:34.248418594 +0000 UTC m=+0.256751757 container attach cfcb809ee14d3e4687c1fe2b84f5cdf36130cd82fcc4df31913d7b3a66d98ae7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_merkle, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Jan 26 10:23:34 compute-0 goofy_merkle[288728]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:23:34 compute-0 goofy_merkle[288728]: --> All data devices are unavailable
Jan 26 10:23:34 compute-0 systemd[1]: libpod-cfcb809ee14d3e4687c1fe2b84f5cdf36130cd82fcc4df31913d7b3a66d98ae7.scope: Deactivated successfully.
Jan 26 10:23:34 compute-0 podman[288744]: 2026-01-26 10:23:34.655153699 +0000 UTC m=+0.034071526 container died cfcb809ee14d3e4687c1fe2b84f5cdf36130cd82fcc4df31913d7b3a66d98ae7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_merkle, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 10:23:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bf7ce4882868fe26863f9542de2e49920365e2ebcfb12c2340c18b9a2998153-merged.mount: Deactivated successfully.
Jan 26 10:23:34 compute-0 podman[288744]: 2026-01-26 10:23:34.702488973 +0000 UTC m=+0.081406760 container remove cfcb809ee14d3e4687c1fe2b84f5cdf36130cd82fcc4df31913d7b3a66d98ae7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_merkle, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:23:34 compute-0 systemd[1]: libpod-conmon-cfcb809ee14d3e4687c1fe2b84f5cdf36130cd82fcc4df31913d7b3a66d98ae7.scope: Deactivated successfully.
Jan 26 10:23:34 compute-0 nova_compute[254880]: 2026-01-26 10:23:34.708 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:34 compute-0 sudo[288605]: pam_unix(sudo:session): session closed for user root
Jan 26 10:23:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:23:34 compute-0 sudo[288760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:23:34 compute-0 sudo[288760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:23:34 compute-0 sudo[288760]: pam_unix(sudo:session): session closed for user root
Jan 26 10:23:34 compute-0 sudo[288785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:23:34 compute-0 sudo[288785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:23:34 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:23:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 26 10:23:35 compute-0 podman[288853]: 2026-01-26 10:23:35.27434113 +0000 UTC m=+0.040516136 container create 1126fe916bf993d544b538e762d2c71be5eb434c7c4c80b76683b3f585325c08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:23:35 compute-0 systemd[1]: Started libpod-conmon-1126fe916bf993d544b538e762d2c71be5eb434c7c4c80b76683b3f585325c08.scope.
Jan 26 10:23:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:23:35 compute-0 podman[288853]: 2026-01-26 10:23:35.25740734 +0000 UTC m=+0.023582366 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:23:35 compute-0 podman[288853]: 2026-01-26 10:23:35.354928036 +0000 UTC m=+0.121103062 container init 1126fe916bf993d544b538e762d2c71be5eb434c7c4c80b76683b3f585325c08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 10:23:35 compute-0 podman[288853]: 2026-01-26 10:23:35.362470935 +0000 UTC m=+0.128645941 container start 1126fe916bf993d544b538e762d2c71be5eb434c7c4c80b76683b3f585325c08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Jan 26 10:23:35 compute-0 sweet_dijkstra[288870]: 167 167
Jan 26 10:23:35 compute-0 podman[288853]: 2026-01-26 10:23:35.368090581 +0000 UTC m=+0.134265587 container attach 1126fe916bf993d544b538e762d2c71be5eb434c7c4c80b76683b3f585325c08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:23:35 compute-0 systemd[1]: libpod-1126fe916bf993d544b538e762d2c71be5eb434c7c4c80b76683b3f585325c08.scope: Deactivated successfully.
Jan 26 10:23:35 compute-0 podman[288853]: 2026-01-26 10:23:35.369754227 +0000 UTC m=+0.135929273 container died 1126fe916bf993d544b538e762d2c71be5eb434c7c4c80b76683b3f585325c08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_dijkstra, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:23:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5b3232d23ee156b212a7f8ddab87e8daca2f12e251c0605236e86a51775725c-merged.mount: Deactivated successfully.
Jan 26 10:23:35 compute-0 podman[288853]: 2026-01-26 10:23:35.407781203 +0000 UTC m=+0.173956209 container remove 1126fe916bf993d544b538e762d2c71be5eb434c7c4c80b76683b3f585325c08 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_dijkstra, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 26 10:23:35 compute-0 systemd[1]: libpod-conmon-1126fe916bf993d544b538e762d2c71be5eb434c7c4c80b76683b3f585325c08.scope: Deactivated successfully.
Jan 26 10:23:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:35.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:35 compute-0 podman[288894]: 2026-01-26 10:23:35.565864389 +0000 UTC m=+0.047793878 container create 9dc969e76959c0dcdcaa1ebf4d58809e39c175484e21c7d3bd986f18f158b08c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_chatelet, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 26 10:23:35 compute-0 systemd[1]: Started libpod-conmon-9dc969e76959c0dcdcaa1ebf4d58809e39c175484e21c7d3bd986f18f158b08c.scope.
Jan 26 10:23:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:23:35 compute-0 podman[288894]: 2026-01-26 10:23:35.547663713 +0000 UTC m=+0.029593232 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd046862c07b7b186f2023100a6cce5ff91a346d9ae6cb0328720428bfe3838/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd046862c07b7b186f2023100a6cce5ff91a346d9ae6cb0328720428bfe3838/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd046862c07b7b186f2023100a6cce5ff91a346d9ae6cb0328720428bfe3838/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:23:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd046862c07b7b186f2023100a6cce5ff91a346d9ae6cb0328720428bfe3838/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:23:35 compute-0 podman[288894]: 2026-01-26 10:23:35.661728179 +0000 UTC m=+0.143657698 container init 9dc969e76959c0dcdcaa1ebf4d58809e39c175484e21c7d3bd986f18f158b08c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 10:23:35 compute-0 podman[288894]: 2026-01-26 10:23:35.667013586 +0000 UTC m=+0.148943075 container start 9dc969e76959c0dcdcaa1ebf4d58809e39c175484e21c7d3bd986f18f158b08c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_chatelet, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 10:23:35 compute-0 podman[288894]: 2026-01-26 10:23:35.671625434 +0000 UTC m=+0.153554923 container attach 9dc969e76959c0dcdcaa1ebf4d58809e39c175484e21c7d3bd986f18f158b08c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_chatelet, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:23:35 compute-0 ceph-mon[74456]: pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 26 10:23:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:35.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:35 compute-0 nova_compute[254880]: 2026-01-26 10:23:35.940 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]: {
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:     "0": [
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:         {
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:             "devices": [
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "/dev/loop3"
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:             ],
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:             "lv_name": "ceph_lv0",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:             "lv_size": "21470642176",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:             "name": "ceph_lv0",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:             "tags": {
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "ceph.cluster_name": "ceph",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "ceph.crush_device_class": "",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "ceph.encrypted": "0",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "ceph.osd_id": "0",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "ceph.type": "block",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "ceph.vdo": "0",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:                 "ceph.with_tpm": "0"
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:             },
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:             "type": "block",
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:             "vg_name": "ceph_vg0"
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:         }
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]:     ]
Jan 26 10:23:35 compute-0 stupefied_chatelet[288911]: }
Jan 26 10:23:35 compute-0 systemd[1]: libpod-9dc969e76959c0dcdcaa1ebf4d58809e39c175484e21c7d3bd986f18f158b08c.scope: Deactivated successfully.
Jan 26 10:23:35 compute-0 podman[288894]: 2026-01-26 10:23:35.985725069 +0000 UTC m=+0.467654558 container died 9dc969e76959c0dcdcaa1ebf4d58809e39c175484e21c7d3bd986f18f158b08c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 10:23:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cd046862c07b7b186f2023100a6cce5ff91a346d9ae6cb0328720428bfe3838-merged.mount: Deactivated successfully.
Jan 26 10:23:36 compute-0 podman[288894]: 2026-01-26 10:23:36.027281352 +0000 UTC m=+0.509210841 container remove 9dc969e76959c0dcdcaa1ebf4d58809e39c175484e21c7d3bd986f18f158b08c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:23:36 compute-0 systemd[1]: libpod-conmon-9dc969e76959c0dcdcaa1ebf4d58809e39c175484e21c7d3bd986f18f158b08c.scope: Deactivated successfully.
Jan 26 10:23:36 compute-0 sudo[288785]: pam_unix(sudo:session): session closed for user root
Jan 26 10:23:36 compute-0 sudo[288931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:23:36 compute-0 sudo[288931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:23:36 compute-0 sudo[288931]: pam_unix(sudo:session): session closed for user root
Jan 26 10:23:36 compute-0 sudo[288956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:23:36 compute-0 sudo[288956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:23:36 compute-0 podman[289024]: 2026-01-26 10:23:36.621587142 +0000 UTC m=+0.062274578 container create eb2c2e7ed4566d9bed3006b6e3b8a1cfe0f38acb9f36b058f3bc1c8e7c8978fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:23:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:23:36] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:23:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:23:36] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:23:36 compute-0 systemd[1]: Started libpod-conmon-eb2c2e7ed4566d9bed3006b6e3b8a1cfe0f38acb9f36b058f3bc1c8e7c8978fa.scope.
Jan 26 10:23:36 compute-0 podman[289024]: 2026-01-26 10:23:36.592445504 +0000 UTC m=+0.033133040 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:23:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:23:36 compute-0 podman[289024]: 2026-01-26 10:23:36.708478864 +0000 UTC m=+0.149166350 container init eb2c2e7ed4566d9bed3006b6e3b8a1cfe0f38acb9f36b058f3bc1c8e7c8978fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mcclintock, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:23:36 compute-0 podman[289024]: 2026-01-26 10:23:36.716426074 +0000 UTC m=+0.157113510 container start eb2c2e7ed4566d9bed3006b6e3b8a1cfe0f38acb9f36b058f3bc1c8e7c8978fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mcclintock, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:23:36 compute-0 podman[289024]: 2026-01-26 10:23:36.721850245 +0000 UTC m=+0.162537691 container attach eb2c2e7ed4566d9bed3006b6e3b8a1cfe0f38acb9f36b058f3bc1c8e7c8978fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 10:23:36 compute-0 admiring_mcclintock[289040]: 167 167
Jan 26 10:23:36 compute-0 systemd[1]: libpod-eb2c2e7ed4566d9bed3006b6e3b8a1cfe0f38acb9f36b058f3bc1c8e7c8978fa.scope: Deactivated successfully.
Jan 26 10:23:36 compute-0 podman[289024]: 2026-01-26 10:23:36.724043756 +0000 UTC m=+0.164731232 container died eb2c2e7ed4566d9bed3006b6e3b8a1cfe0f38acb9f36b058f3bc1c8e7c8978fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:23:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f683128706c2697c04b7a8dcf792798c9d02979f13273cad757465a644cd24a-merged.mount: Deactivated successfully.
Jan 26 10:23:36 compute-0 podman[289024]: 2026-01-26 10:23:36.764460578 +0000 UTC m=+0.205148014 container remove eb2c2e7ed4566d9bed3006b6e3b8a1cfe0f38acb9f36b058f3bc1c8e7c8978fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mcclintock, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:23:36 compute-0 systemd[1]: libpod-conmon-eb2c2e7ed4566d9bed3006b6e3b8a1cfe0f38acb9f36b058f3bc1c8e7c8978fa.scope: Deactivated successfully.
Jan 26 10:23:36 compute-0 podman[289066]: 2026-01-26 10:23:36.916840465 +0000 UTC m=+0.040542036 container create 6dc0e3d0872351e03cbfd03d032f5366872a75a760a008185beddf60f50bac81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:23:36 compute-0 systemd[1]: Started libpod-conmon-6dc0e3d0872351e03cbfd03d032f5366872a75a760a008185beddf60f50bac81.scope.
Jan 26 10:23:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab9f0f5440aea75490611bedb1735312d507da32d662beed83a4305a11a76d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab9f0f5440aea75490611bedb1735312d507da32d662beed83a4305a11a76d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab9f0f5440aea75490611bedb1735312d507da32d662beed83a4305a11a76d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:23:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab9f0f5440aea75490611bedb1735312d507da32d662beed83a4305a11a76d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:23:36 compute-0 podman[289066]: 2026-01-26 10:23:36.899017171 +0000 UTC m=+0.022718752 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:23:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:23:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:23:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:23:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:23:37 compute-0 podman[289066]: 2026-01-26 10:23:37.00458597 +0000 UTC m=+0.128287561 container init 6dc0e3d0872351e03cbfd03d032f5366872a75a760a008185beddf60f50bac81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hopper, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:23:37 compute-0 podman[289066]: 2026-01-26 10:23:37.01107104 +0000 UTC m=+0.134772601 container start 6dc0e3d0872351e03cbfd03d032f5366872a75a760a008185beddf60f50bac81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hopper, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 10:23:37 compute-0 podman[289066]: 2026-01-26 10:23:37.014872385 +0000 UTC m=+0.138573946 container attach 6dc0e3d0872351e03cbfd03d032f5366872a75a760a008185beddf60f50bac81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:23:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:23:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:37.273Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:23:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:37.273Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:23:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:37.274Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:37.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:37 compute-0 lvm[289158]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:23:37 compute-0 lvm[289158]: VG ceph_vg0 finished
Jan 26 10:23:37 compute-0 gracious_hopper[289083]: {}
Jan 26 10:23:37 compute-0 systemd[1]: libpod-6dc0e3d0872351e03cbfd03d032f5366872a75a760a008185beddf60f50bac81.scope: Deactivated successfully.
Jan 26 10:23:37 compute-0 systemd[1]: libpod-6dc0e3d0872351e03cbfd03d032f5366872a75a760a008185beddf60f50bac81.scope: Consumed 1.074s CPU time.
Jan 26 10:23:37 compute-0 podman[289066]: 2026-01-26 10:23:37.719470027 +0000 UTC m=+0.843171588 container died 6dc0e3d0872351e03cbfd03d032f5366872a75a760a008185beddf60f50bac81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hopper, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 10:23:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ab9f0f5440aea75490611bedb1735312d507da32d662beed83a4305a11a76d0-merged.mount: Deactivated successfully.
Jan 26 10:23:37 compute-0 podman[289066]: 2026-01-26 10:23:37.759300762 +0000 UTC m=+0.883002323 container remove 6dc0e3d0872351e03cbfd03d032f5366872a75a760a008185beddf60f50bac81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:23:37 compute-0 systemd[1]: libpod-conmon-6dc0e3d0872351e03cbfd03d032f5366872a75a760a008185beddf60f50bac81.scope: Deactivated successfully.
Jan 26 10:23:37 compute-0 sudo[288956]: pam_unix(sudo:session): session closed for user root
Jan 26 10:23:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:23:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:37.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:37 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:23:37 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:23:38 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:23:38 compute-0 sudo[289173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:23:38 compute-0 sudo[289173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:23:38 compute-0 sudo[289173]: pam_unix(sudo:session): session closed for user root
Jan 26 10:23:38 compute-0 ceph-mon[74456]: pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:23:38 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:23:38 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:23:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:38.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:23:39 compute-0 ceph-mon[74456]: pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:23:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:39.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:39 compute-0 nova_compute[254880]: 2026-01-26 10:23:39.761 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:39.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:23:39 compute-0 nova_compute[254880]: 2026-01-26 10:23:39.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.066 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.066 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.066 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.066 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.067 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:23:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:23:40 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/952901761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.505 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:23:40 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/952901761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.664 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.666 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4393MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.667 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.667 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.746 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.746 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.759 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:23:40 compute-0 nova_compute[254880]: 2026-01-26 10:23:40.941 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:23:41 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:23:41 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1780312808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:23:41 compute-0 nova_compute[254880]: 2026-01-26 10:23:41.205 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:23:41 compute-0 nova_compute[254880]: 2026-01-26 10:23:41.210 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:23:41 compute-0 nova_compute[254880]: 2026-01-26 10:23:41.260 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:23:41 compute-0 nova_compute[254880]: 2026-01-26 10:23:41.262 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:23:41 compute-0 nova_compute[254880]: 2026-01-26 10:23:41.262 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:23:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:41.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:41 compute-0 ceph-mon[74456]: pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:23:41 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1780312808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:23:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:41.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:23:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:23:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:23:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:23:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:23:43 compute-0 ceph-mon[74456]: pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:23:43 compute-0 nova_compute[254880]: 2026-01-26 10:23:43.261 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:23:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:43.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:43.596Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:43.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:43 compute-0 nova_compute[254880]: 2026-01-26 10:23:43.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:23:44 compute-0 nova_compute[254880]: 2026-01-26 10:23:44.765 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:23:44 compute-0 nova_compute[254880]: 2026-01-26 10:23:44.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:23:44 compute-0 nova_compute[254880]: 2026-01-26 10:23:44.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:23:44 compute-0 nova_compute[254880]: 2026-01-26 10:23:44.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:23:44 compute-0 nova_compute[254880]: 2026-01-26 10:23:44.974 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:23:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:45.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:45 compute-0 ceph-mon[74456]: pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:45.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:45 compute-0 nova_compute[254880]: 2026-01-26 10:23:45.943 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:45 compute-0 nova_compute[254880]: 2026-01-26 10:23:45.957 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:23:45 compute-0 nova_compute[254880]: 2026-01-26 10:23:45.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:23:45 compute-0 nova_compute[254880]: 2026-01-26 10:23:45.989 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:23:45 compute-0 nova_compute[254880]: 2026-01-26 10:23:45.989 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:23:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:23:46] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:23:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:23:46] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:23:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3486596600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:23:46 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/611560939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:23:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:23:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:23:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:23:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:23:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:47 compute-0 podman[289252]: 2026-01-26 10:23:47.158173358 +0000 UTC m=+0.087825208 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 10:23:47 compute-0 sudo[289278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:23:47 compute-0 sudo[289278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:23:47 compute-0 sudo[289278]: pam_unix(sudo:session): session closed for user root
Jan 26 10:23:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:47.275Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:47.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:47 compute-0 ceph-mon[74456]: pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:47.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:23:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:23:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:23:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:23:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:23:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:23:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:23:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:23:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:23:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:48.905Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:23:48 compute-0 nova_compute[254880]: 2026-01-26 10:23:48.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:23:48 compute-0 nova_compute[254880]: 2026-01-26 10:23:48.958 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:23:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:49.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:49 compute-0 nova_compute[254880]: 2026-01-26 10:23:49.813 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:49.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:23:50 compute-0 ceph-mon[74456]: pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:50 compute-0 nova_compute[254880]: 2026-01-26 10:23:50.943 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:50 compute-0 nova_compute[254880]: 2026-01-26 10:23:50.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:23:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:23:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:51.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:23:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:51.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:23:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:23:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:23:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:23:52 compute-0 ceph-mon[74456]: pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/909490233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:23:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4236003645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:23:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:53.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:53.596Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:53.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:54 compute-0 ceph-mon[74456]: pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:23:54.708 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:23:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:23:54.708 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:23:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:23:54.708 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:23:54 compute-0 nova_compute[254880]: 2026-01-26 10:23:54.816 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:23:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:55.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:55.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:55 compute-0 nova_compute[254880]: 2026-01-26 10:23:55.945 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:56 compute-0 ceph-mon[74456]: pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:23:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:23:56] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:23:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:23:56] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:23:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:23:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:23:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:23:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:23:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:23:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:57.276Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:57.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:57 compute-0 ceph-mon[74456]: pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:57.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/2050018793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:23:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/2050018793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:23:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:23:58.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:23:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:23:59.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:23:59 compute-0 ceph-mon[74456]: pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:23:59 compute-0 nova_compute[254880]: 2026-01-26 10:23:59.828 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:23:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:23:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:23:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:23:59.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:24:00 compute-0 nova_compute[254880]: 2026-01-26 10:24:00.945 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:01.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:01.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:24:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:24:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:24:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:24:02 compute-0 ceph-mon[74456]: pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:03 compute-0 podman[289319]: 2026-01-26 10:24:03.116058694 +0000 UTC m=+0.051478625 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 10:24:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:03.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:03.597Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:24:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:24:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:03.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:04 compute-0 ceph-mon[74456]: pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:24:04 compute-0 nova_compute[254880]: 2026-01-26 10:24:04.831 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:24:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:05 compute-0 ceph-mon[74456]: pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:05.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:05.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:05 compute-0 nova_compute[254880]: 2026-01-26 10:24:05.947 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:24:06] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:24:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:24:06] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:24:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:24:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:24:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:24:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:24:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:07.277Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:07 compute-0 sudo[289342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:24:07 compute-0 sudo[289342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:24:07 compute-0 sudo[289342]: pam_unix(sudo:session): session closed for user root
Jan 26 10:24:07 compute-0 ceph-mon[74456]: pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:07.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:07.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:08.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:09.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:09 compute-0 nova_compute[254880]: 2026-01-26 10:24:09.835 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:24:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:09.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:24:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:24:10 compute-0 ceph-mon[74456]: pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:10 compute-0 nova_compute[254880]: 2026-01-26 10:24:10.948 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:11.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:11.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:24:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:24:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:24:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:24:12 compute-0 ceph-mon[74456]: pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:13.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:13.598Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:13.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:14 compute-0 ceph-mon[74456]: pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:14 compute-0 nova_compute[254880]: 2026-01-26 10:24:14.839 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:24:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:15.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:15.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:15 compute-0 nova_compute[254880]: 2026-01-26 10:24:15.950 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:16 compute-0 ceph-mon[74456]: pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:24:16] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:24:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:24:16] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:24:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:24:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:24:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:24:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:24:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:17.278Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:24:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:17.278Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:17.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:17.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:18 compute-0 podman[289377]: 2026-01-26 10:24:18.184468442 +0000 UTC m=+0.099478345 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 26 10:24:18 compute-0 ceph-mon[74456]: pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:24:18
Jan 26 10:24:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:24:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:24:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.data', '.nfs', 'images', 'default.rgw.meta', 'vms', 'default.rgw.log', 'backups']
Jan 26 10:24:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:24:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:24:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:24:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:24:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:24:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:24:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:24:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:24:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:24:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:18.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:24:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:24:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:19.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:19 compute-0 nova_compute[254880]: 2026-01-26 10:24:19.842 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:19.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:24:20 compute-0 ceph-mon[74456]: pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:20 compute-0 ceph-mgr[74755]: [devicehealth INFO root] Check health
Jan 26 10:24:20 compute-0 nova_compute[254880]: 2026-01-26 10:24:20.952 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:21 compute-0 ceph-mon[74456]: pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:24:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:21.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:24:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:21.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:24:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:24:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:24:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:24:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:23.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:23.598Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:23.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:24 compute-0 ceph-mon[74456]: pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:24 compute-0 nova_compute[254880]: 2026-01-26 10:24:24.845 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:24:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:25.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:24:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:25.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:24:25 compute-0 nova_compute[254880]: 2026-01-26 10:24:25.954 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:26 compute-0 ceph-mon[74456]: pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:24:26] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:24:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:24:26] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:24:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:24:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:24:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:24:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:24:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:27.279Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:27 compute-0 sudo[289414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:24:27 compute-0 sudo[289414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:24:27 compute-0 sudo[289414]: pam_unix(sudo:session): session closed for user root
Jan 26 10:24:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:27.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:27.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:28 compute-0 ceph-mon[74456]: pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:28.909Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:24:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:28.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:24:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:29.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:29 compute-0 nova_compute[254880]: 2026-01-26 10:24:29.848 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:29.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:24:30 compute-0 ceph-mon[74456]: pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:30 compute-0 nova_compute[254880]: 2026-01-26 10:24:30.956 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:31 compute-0 ceph-mon[74456]: pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:31.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:31.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:24:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:24:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:24:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:24:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:33.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:33.599Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:24:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:24:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:24:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:33.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:24:34 compute-0 podman[289445]: 2026-01-26 10:24:34.116479548 +0000 UTC m=+0.046327837 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 10:24:34 compute-0 ceph-mon[74456]: pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:24:34 compute-0 nova_compute[254880]: 2026-01-26 10:24:34.863 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:24:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:35 compute-0 ceph-mon[74456]: pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:24:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:35.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:24:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:35.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:35 compute-0 nova_compute[254880]: 2026-01-26 10:24:35.957 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:24:36] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:24:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:24:36] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:24:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:24:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:24:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:24:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:24:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:37.280Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:37 compute-0 ceph-mon[74456]: pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:24:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:37.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:24:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:37.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:38 compute-0 sudo[289469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:24:38 compute-0 sudo[289469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:24:38 compute-0 sudo[289469]: pam_unix(sudo:session): session closed for user root
Jan 26 10:24:38 compute-0 sudo[289494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:24:38 compute-0 sudo[289494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:24:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:38.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:38 compute-0 sudo[289494]: pam_unix(sudo:session): session closed for user root
Jan 26 10:24:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:24:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:24:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:24:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:24:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:24:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:24:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:24:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:24:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:24:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:24:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:24:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:24:39 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:24:39 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:24:39 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:24:39 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:24:39 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:24:39 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:24:39 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:24:39 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:24:39 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:24:39 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:24:39 compute-0 sudo[289554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:24:39 compute-0 sudo[289554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:24:39 compute-0 sudo[289554]: pam_unix(sudo:session): session closed for user root
Jan 26 10:24:39 compute-0 sudo[289579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:24:39 compute-0 sudo[289579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:24:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:39.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:39 compute-0 podman[289645]: 2026-01-26 10:24:39.685136937 +0000 UTC m=+0.044078656 container create cfda28af8eec28c95cd114e8710fc4ab6902948013a4ddbcc2819adaa410c029 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_euclid, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 10:24:39 compute-0 systemd[1]: Started libpod-conmon-cfda28af8eec28c95cd114e8710fc4ab6902948013a4ddbcc2819adaa410c029.scope.
Jan 26 10:24:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:24:39 compute-0 podman[289645]: 2026-01-26 10:24:39.666177182 +0000 UTC m=+0.025118901 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:24:39 compute-0 podman[289645]: 2026-01-26 10:24:39.765002448 +0000 UTC m=+0.123944207 container init cfda28af8eec28c95cd114e8710fc4ab6902948013a4ddbcc2819adaa410c029 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 10:24:39 compute-0 podman[289645]: 2026-01-26 10:24:39.773122765 +0000 UTC m=+0.132064504 container start cfda28af8eec28c95cd114e8710fc4ab6902948013a4ddbcc2819adaa410c029 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_euclid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:24:39 compute-0 podman[289645]: 2026-01-26 10:24:39.776211637 +0000 UTC m=+0.135153376 container attach cfda28af8eec28c95cd114e8710fc4ab6902948013a4ddbcc2819adaa410c029 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_euclid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:24:39 compute-0 elastic_euclid[289661]: 167 167
Jan 26 10:24:39 compute-0 systemd[1]: libpod-cfda28af8eec28c95cd114e8710fc4ab6902948013a4ddbcc2819adaa410c029.scope: Deactivated successfully.
Jan 26 10:24:39 compute-0 conmon[289661]: conmon cfda28af8eec28c95cd1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfda28af8eec28c95cd114e8710fc4ab6902948013a4ddbcc2819adaa410c029.scope/container/memory.events
Jan 26 10:24:39 compute-0 podman[289645]: 2026-01-26 10:24:39.779456374 +0000 UTC m=+0.138398143 container died cfda28af8eec28c95cd114e8710fc4ab6902948013a4ddbcc2819adaa410c029 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_euclid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:24:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-3404767d4686b659781520aa8c869b151ea15d858b1345c04b5b0018d5e3be74-merged.mount: Deactivated successfully.
Jan 26 10:24:39 compute-0 podman[289645]: 2026-01-26 10:24:39.820400876 +0000 UTC m=+0.179342605 container remove cfda28af8eec28c95cd114e8710fc4ab6902948013a4ddbcc2819adaa410c029 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_euclid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:24:39 compute-0 systemd[1]: libpod-conmon-cfda28af8eec28c95cd114e8710fc4ab6902948013a4ddbcc2819adaa410c029.scope: Deactivated successfully.
Jan 26 10:24:39 compute-0 nova_compute[254880]: 2026-01-26 10:24:39.866 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:39.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:40 compute-0 podman[289685]: 2026-01-26 10:24:40.011860824 +0000 UTC m=+0.068445607 container create 934945318e1b6e4d340d9183dae4099f10eb865fce210b01266298deb7e93626 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 26 10:24:40 compute-0 systemd[1]: Started libpod-conmon-934945318e1b6e4d340d9183dae4099f10eb865fce210b01266298deb7e93626.scope.
Jan 26 10:24:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:24:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e003db8c507809020b9edca4720805b8f9c5a2a4adf3b8360b324d4e0275c12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:24:40 compute-0 podman[289685]: 2026-01-26 10:24:39.988989403 +0000 UTC m=+0.045574236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e003db8c507809020b9edca4720805b8f9c5a2a4adf3b8360b324d4e0275c12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e003db8c507809020b9edca4720805b8f9c5a2a4adf3b8360b324d4e0275c12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e003db8c507809020b9edca4720805b8f9c5a2a4adf3b8360b324d4e0275c12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e003db8c507809020b9edca4720805b8f9c5a2a4adf3b8360b324d4e0275c12/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:24:40 compute-0 podman[289685]: 2026-01-26 10:24:40.098451973 +0000 UTC m=+0.155036776 container init 934945318e1b6e4d340d9183dae4099f10eb865fce210b01266298deb7e93626 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swanson, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:24:40 compute-0 podman[289685]: 2026-01-26 10:24:40.10468565 +0000 UTC m=+0.161270433 container start 934945318e1b6e4d340d9183dae4099f10eb865fce210b01266298deb7e93626 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 26 10:24:40 compute-0 podman[289685]: 2026-01-26 10:24:40.107888325 +0000 UTC m=+0.164473138 container attach 934945318e1b6e4d340d9183dae4099f10eb865fce210b01266298deb7e93626 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swanson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:24:40 compute-0 ceph-mon[74456]: pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:40 compute-0 ceph-mon[74456]: pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:24:40 compute-0 strange_swanson[289702]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:24:40 compute-0 strange_swanson[289702]: --> All data devices are unavailable
Jan 26 10:24:40 compute-0 systemd[1]: libpod-934945318e1b6e4d340d9183dae4099f10eb865fce210b01266298deb7e93626.scope: Deactivated successfully.
Jan 26 10:24:40 compute-0 podman[289717]: 2026-01-26 10:24:40.503127689 +0000 UTC m=+0.033836013 container died 934945318e1b6e4d340d9183dae4099f10eb865fce210b01266298deb7e93626 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 10:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e003db8c507809020b9edca4720805b8f9c5a2a4adf3b8360b324d4e0275c12-merged.mount: Deactivated successfully.
Jan 26 10:24:40 compute-0 podman[289717]: 2026-01-26 10:24:40.546086626 +0000 UTC m=+0.076794910 container remove 934945318e1b6e4d340d9183dae4099f10eb865fce210b01266298deb7e93626 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Jan 26 10:24:40 compute-0 systemd[1]: libpod-conmon-934945318e1b6e4d340d9183dae4099f10eb865fce210b01266298deb7e93626.scope: Deactivated successfully.
Jan 26 10:24:40 compute-0 sudo[289579]: pam_unix(sudo:session): session closed for user root
Jan 26 10:24:40 compute-0 sudo[289734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:24:40 compute-0 sudo[289734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:24:40 compute-0 sudo[289734]: pam_unix(sudo:session): session closed for user root
Jan 26 10:24:40 compute-0 sudo[289759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:24:40 compute-0 sudo[289759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:24:40 compute-0 nova_compute[254880]: 2026-01-26 10:24:40.957 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:41 compute-0 podman[289826]: 2026-01-26 10:24:41.093411217 +0000 UTC m=+0.036789662 container create 9786cd6708716b780f243d9ecec69e004fee25154f375361145f0937be2ee516 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_buck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:24:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:24:41 compute-0 systemd[1]: Started libpod-conmon-9786cd6708716b780f243d9ecec69e004fee25154f375361145f0937be2ee516.scope.
Jan 26 10:24:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:24:41 compute-0 podman[289826]: 2026-01-26 10:24:41.167857553 +0000 UTC m=+0.111236018 container init 9786cd6708716b780f243d9ecec69e004fee25154f375361145f0937be2ee516 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_buck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:24:41 compute-0 podman[289826]: 2026-01-26 10:24:41.174419358 +0000 UTC m=+0.117797803 container start 9786cd6708716b780f243d9ecec69e004fee25154f375361145f0937be2ee516 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Jan 26 10:24:41 compute-0 podman[289826]: 2026-01-26 10:24:41.078876289 +0000 UTC m=+0.022254754 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:24:41 compute-0 podman[289826]: 2026-01-26 10:24:41.180686925 +0000 UTC m=+0.124065370 container attach 9786cd6708716b780f243d9ecec69e004fee25154f375361145f0937be2ee516 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_buck, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 10:24:41 compute-0 sharp_buck[289843]: 167 167
Jan 26 10:24:41 compute-0 systemd[1]: libpod-9786cd6708716b780f243d9ecec69e004fee25154f375361145f0937be2ee516.scope: Deactivated successfully.
Jan 26 10:24:41 compute-0 conmon[289843]: conmon 9786cd6708716b780f24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9786cd6708716b780f243d9ecec69e004fee25154f375361145f0937be2ee516.scope/container/memory.events
Jan 26 10:24:41 compute-0 podman[289826]: 2026-01-26 10:24:41.183405017 +0000 UTC m=+0.126783482 container died 9786cd6708716b780f243d9ecec69e004fee25154f375361145f0937be2ee516 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_buck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.192909) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423081192975, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1814, "num_deletes": 251, "total_data_size": 3509274, "memory_usage": 3563576, "flush_reason": "Manual Compaction"}
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423081214960, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3423062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35214, "largest_seqno": 37026, "table_properties": {"data_size": 3414852, "index_size": 5024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16999, "raw_average_key_size": 20, "raw_value_size": 3398409, "raw_average_value_size": 4036, "num_data_blocks": 218, "num_entries": 842, "num_filter_entries": 842, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769422899, "oldest_key_time": 1769422899, "file_creation_time": 1769423081, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 22449 microseconds, and 12623 cpu microseconds.
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-30f26e8ce1013cf08aab18c12214b36f173bba79ac35f2c6782825278d801118-merged.mount: Deactivated successfully.
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.215359) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3423062 bytes OK
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.215484) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.217126) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.217176) EVENT_LOG_v1 {"time_micros": 1769423081217171, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.217206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3501833, prev total WAL file size 3501833, number of live WAL files 2.
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.218720) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3342KB)], [77(11MB)]
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423081218786, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 15274511, "oldest_snapshot_seqno": -1}
Jan 26 10:24:41 compute-0 podman[289826]: 2026-01-26 10:24:41.229496237 +0000 UTC m=+0.172874682 container remove 9786cd6708716b780f243d9ecec69e004fee25154f375361145f0937be2ee516 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Jan 26 10:24:41 compute-0 systemd[1]: libpod-conmon-9786cd6708716b780f243d9ecec69e004fee25154f375361145f0937be2ee516.scope: Deactivated successfully.
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6734 keys, 13077632 bytes, temperature: kUnknown
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423081286041, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 13077632, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13034967, "index_size": 24697, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 176863, "raw_average_key_size": 26, "raw_value_size": 12915944, "raw_average_value_size": 1918, "num_data_blocks": 970, "num_entries": 6734, "num_filter_entries": 6734, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769423081, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.286308) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 13077632 bytes
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.288103) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 226.9 rd, 194.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 11.3 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(8.3) write-amplify(3.8) OK, records in: 7250, records dropped: 516 output_compression: NoCompression
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.288121) EVENT_LOG_v1 {"time_micros": 1769423081288113, "job": 44, "event": "compaction_finished", "compaction_time_micros": 67328, "compaction_time_cpu_micros": 27486, "output_level": 6, "num_output_files": 1, "total_output_size": 13077632, "num_input_records": 7250, "num_output_records": 6734, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423081288954, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423081291443, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.218617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.291487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.291491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.291493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.291494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:24:41 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:24:41.291496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:24:41 compute-0 podman[289867]: 2026-01-26 10:24:41.38853737 +0000 UTC m=+0.043737568 container create 1cf55713dffb231bfe0b34776f4fbf84ff8acb7c84b9206c0598f3c404f33e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_margulis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 10:24:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=cleanup t=2026-01-26T10:24:41.420679028Z level=info msg="Completed cleanup jobs" duration=22.441579ms
Jan 26 10:24:41 compute-0 systemd[1]: Started libpod-conmon-1cf55713dffb231bfe0b34776f4fbf84ff8acb7c84b9206c0598f3c404f33e89.scope.
Jan 26 10:24:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42588906e1ca3c9510d536de505a582a065cd5a0d422a4dcffb75a2e158cd6cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42588906e1ca3c9510d536de505a582a065cd5a0d422a4dcffb75a2e158cd6cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42588906e1ca3c9510d536de505a582a065cd5a0d422a4dcffb75a2e158cd6cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42588906e1ca3c9510d536de505a582a065cd5a0d422a4dcffb75a2e158cd6cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:24:41 compute-0 podman[289867]: 2026-01-26 10:24:41.461724402 +0000 UTC m=+0.116924640 container init 1cf55713dffb231bfe0b34776f4fbf84ff8acb7c84b9206c0598f3c404f33e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_margulis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 26 10:24:41 compute-0 podman[289867]: 2026-01-26 10:24:41.369667806 +0000 UTC m=+0.024868024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:24:41 compute-0 podman[289867]: 2026-01-26 10:24:41.472320515 +0000 UTC m=+0.127520723 container start 1cf55713dffb231bfe0b34776f4fbf84ff8acb7c84b9206c0598f3c404f33e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_margulis, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:24:41 compute-0 podman[289867]: 2026-01-26 10:24:41.47660023 +0000 UTC m=+0.131800428 container attach 1cf55713dffb231bfe0b34776f4fbf84ff8acb7c84b9206c0598f3c404f33e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 26 10:24:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=plugins.update.checker t=2026-01-26T10:24:41.523850159Z level=info msg="Update check succeeded" duration=53.784944ms
Jan 26 10:24:41 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0[105232]: logger=grafana.update.checker t=2026-01-26T10:24:41.525179595Z level=info msg="Update check succeeded" duration=55.141921ms
Jan 26 10:24:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:41.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:41 compute-0 cranky_margulis[289883]: {
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:     "0": [
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:         {
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:             "devices": [
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "/dev/loop3"
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:             ],
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:             "lv_name": "ceph_lv0",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:             "lv_size": "21470642176",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:             "name": "ceph_lv0",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:             "tags": {
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "ceph.cluster_name": "ceph",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "ceph.crush_device_class": "",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "ceph.encrypted": "0",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "ceph.osd_id": "0",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "ceph.type": "block",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "ceph.vdo": "0",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:                 "ceph.with_tpm": "0"
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:             },
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:             "type": "block",
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:             "vg_name": "ceph_vg0"
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:         }
Jan 26 10:24:41 compute-0 cranky_margulis[289883]:     ]
Jan 26 10:24:41 compute-0 cranky_margulis[289883]: }
Jan 26 10:24:41 compute-0 systemd[1]: libpod-1cf55713dffb231bfe0b34776f4fbf84ff8acb7c84b9206c0598f3c404f33e89.scope: Deactivated successfully.
Jan 26 10:24:41 compute-0 podman[289867]: 2026-01-26 10:24:41.764445448 +0000 UTC m=+0.419645656 container died 1cf55713dffb231bfe0b34776f4fbf84ff8acb7c84b9206c0598f3c404f33e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_margulis, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-42588906e1ca3c9510d536de505a582a065cd5a0d422a4dcffb75a2e158cd6cc-merged.mount: Deactivated successfully.
Jan 26 10:24:41 compute-0 podman[289867]: 2026-01-26 10:24:41.820003521 +0000 UTC m=+0.475203719 container remove 1cf55713dffb231bfe0b34776f4fbf84ff8acb7c84b9206c0598f3c404f33e89 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_margulis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:24:41 compute-0 systemd[1]: libpod-conmon-1cf55713dffb231bfe0b34776f4fbf84ff8acb7c84b9206c0598f3c404f33e89.scope: Deactivated successfully.
Jan 26 10:24:41 compute-0 sudo[289759]: pam_unix(sudo:session): session closed for user root
Jan 26 10:24:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:41.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:41 compute-0 sudo[289906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:24:41 compute-0 sudo[289906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:24:41 compute-0 sudo[289906]: pam_unix(sudo:session): session closed for user root
Jan 26 10:24:41 compute-0 nova_compute[254880]: 2026-01-26 10:24:41.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:24:41 compute-0 sudo[289931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:24:41 compute-0 sudo[289931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:24:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:24:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:24:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:24:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:24:42 compute-0 nova_compute[254880]: 2026-01-26 10:24:42.050 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:24:42 compute-0 nova_compute[254880]: 2026-01-26 10:24:42.050 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:24:42 compute-0 nova_compute[254880]: 2026-01-26 10:24:42.050 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:24:42 compute-0 nova_compute[254880]: 2026-01-26 10:24:42.051 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:24:42 compute-0 nova_compute[254880]: 2026-01-26 10:24:42.051 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:24:42 compute-0 ceph-mon[74456]: pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:24:42 compute-0 podman[290016]: 2026-01-26 10:24:42.366184642 +0000 UTC m=+0.039165876 container create 58fdca947231c3be64c05255fda60389dde25e634c714ff156d5eb19e5f35bc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_wozniak, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:24:42 compute-0 systemd[1]: Started libpod-conmon-58fdca947231c3be64c05255fda60389dde25e634c714ff156d5eb19e5f35bc3.scope.
Jan 26 10:24:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:24:42 compute-0 podman[290016]: 2026-01-26 10:24:42.350654757 +0000 UTC m=+0.023636011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:24:42 compute-0 podman[290016]: 2026-01-26 10:24:42.447133561 +0000 UTC m=+0.120114885 container init 58fdca947231c3be64c05255fda60389dde25e634c714ff156d5eb19e5f35bc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:24:42 compute-0 podman[290016]: 2026-01-26 10:24:42.454812436 +0000 UTC m=+0.127793670 container start 58fdca947231c3be64c05255fda60389dde25e634c714ff156d5eb19e5f35bc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_wozniak, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:24:42 compute-0 podman[290016]: 2026-01-26 10:24:42.458743021 +0000 UTC m=+0.131724305 container attach 58fdca947231c3be64c05255fda60389dde25e634c714ff156d5eb19e5f35bc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_wozniak, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 10:24:42 compute-0 elastic_wozniak[290032]: 167 167
Jan 26 10:24:42 compute-0 systemd[1]: libpod-58fdca947231c3be64c05255fda60389dde25e634c714ff156d5eb19e5f35bc3.scope: Deactivated successfully.
Jan 26 10:24:42 compute-0 conmon[290032]: conmon 58fdca947231c3be64c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58fdca947231c3be64c05255fda60389dde25e634c714ff156d5eb19e5f35bc3.scope/container/memory.events
Jan 26 10:24:42 compute-0 podman[290016]: 2026-01-26 10:24:42.464067023 +0000 UTC m=+0.137048247 container died 58fdca947231c3be64c05255fda60389dde25e634c714ff156d5eb19e5f35bc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 10:24:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:24:42 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4165630013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-954e0257bb7e8c990d442ba2347cbcc1479122c215a29a0ef3aeee70b6757ed3-merged.mount: Deactivated successfully.
Jan 26 10:24:42 compute-0 nova_compute[254880]: 2026-01-26 10:24:42.511 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:24:42 compute-0 podman[290016]: 2026-01-26 10:24:42.523638533 +0000 UTC m=+0.196619767 container remove 58fdca947231c3be64c05255fda60389dde25e634c714ff156d5eb19e5f35bc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 10:24:42 compute-0 systemd[1]: libpod-conmon-58fdca947231c3be64c05255fda60389dde25e634c714ff156d5eb19e5f35bc3.scope: Deactivated successfully.
Jan 26 10:24:42 compute-0 nova_compute[254880]: 2026-01-26 10:24:42.719 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:24:42 compute-0 nova_compute[254880]: 2026-01-26 10:24:42.721 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4422MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:24:42 compute-0 nova_compute[254880]: 2026-01-26 10:24:42.721 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:24:42 compute-0 nova_compute[254880]: 2026-01-26 10:24:42.721 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:24:42 compute-0 podman[290061]: 2026-01-26 10:24:42.722477967 +0000 UTC m=+0.047950541 container create 2ffff9537f93cbc625c3ebf2aaa879e51553b74f7c3b3db2c353baebe9d5b0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_villani, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 26 10:24:42 compute-0 systemd[1]: Started libpod-conmon-2ffff9537f93cbc625c3ebf2aaa879e51553b74f7c3b3db2c353baebe9d5b0a0.scope.
Jan 26 10:24:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:24:42 compute-0 podman[290061]: 2026-01-26 10:24:42.703528881 +0000 UTC m=+0.029001475 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f870d857aa86d75278bf4d0f76cd884d5db4e634f2b8d6e11dfd016f026a7668/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f870d857aa86d75278bf4d0f76cd884d5db4e634f2b8d6e11dfd016f026a7668/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f870d857aa86d75278bf4d0f76cd884d5db4e634f2b8d6e11dfd016f026a7668/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f870d857aa86d75278bf4d0f76cd884d5db4e634f2b8d6e11dfd016f026a7668/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:24:42 compute-0 podman[290061]: 2026-01-26 10:24:42.821079857 +0000 UTC m=+0.146552461 container init 2ffff9537f93cbc625c3ebf2aaa879e51553b74f7c3b3db2c353baebe9d5b0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_villani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 26 10:24:42 compute-0 podman[290061]: 2026-01-26 10:24:42.826689127 +0000 UTC m=+0.152161701 container start 2ffff9537f93cbc625c3ebf2aaa879e51553b74f7c3b3db2c353baebe9d5b0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_villani, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 10:24:42 compute-0 podman[290061]: 2026-01-26 10:24:42.829753098 +0000 UTC m=+0.155225702 container attach 2ffff9537f93cbc625c3ebf2aaa879e51553b74f7c3b3db2c353baebe9d5b0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_villani, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:24:42 compute-0 nova_compute[254880]: 2026-01-26 10:24:42.931 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:24:42 compute-0 nova_compute[254880]: 2026-01-26 10:24:42.932 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:24:43 compute-0 nova_compute[254880]: 2026-01-26 10:24:43.011 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:24:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:24:43 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4165630013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:24:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:24:43 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2617897557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:24:43 compute-0 lvm[290172]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:24:43 compute-0 lvm[290172]: VG ceph_vg0 finished
Jan 26 10:24:43 compute-0 nova_compute[254880]: 2026-01-26 10:24:43.471 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:24:43 compute-0 nova_compute[254880]: 2026-01-26 10:24:43.477 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:24:43 compute-0 elated_villani[290078]: {}
Jan 26 10:24:43 compute-0 nova_compute[254880]: 2026-01-26 10:24:43.495 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:24:43 compute-0 nova_compute[254880]: 2026-01-26 10:24:43.496 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:24:43 compute-0 nova_compute[254880]: 2026-01-26 10:24:43.496 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:24:43 compute-0 systemd[1]: libpod-2ffff9537f93cbc625c3ebf2aaa879e51553b74f7c3b3db2c353baebe9d5b0a0.scope: Deactivated successfully.
Jan 26 10:24:43 compute-0 systemd[1]: libpod-2ffff9537f93cbc625c3ebf2aaa879e51553b74f7c3b3db2c353baebe9d5b0a0.scope: Consumed 1.172s CPU time.
Jan 26 10:24:43 compute-0 podman[290061]: 2026-01-26 10:24:43.515956654 +0000 UTC m=+0.841429228 container died 2ffff9537f93cbc625c3ebf2aaa879e51553b74f7c3b3db2c353baebe9d5b0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 10:24:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f870d857aa86d75278bf4d0f76cd884d5db4e634f2b8d6e11dfd016f026a7668-merged.mount: Deactivated successfully.
Jan 26 10:24:43 compute-0 podman[290061]: 2026-01-26 10:24:43.557720689 +0000 UTC m=+0.883193263 container remove 2ffff9537f93cbc625c3ebf2aaa879e51553b74f7c3b3db2c353baebe9d5b0a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_villani, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:24:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:43.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:43 compute-0 systemd[1]: libpod-conmon-2ffff9537f93cbc625c3ebf2aaa879e51553b74f7c3b3db2c353baebe9d5b0a0.scope: Deactivated successfully.
Jan 26 10:24:43 compute-0 sudo[289931]: pam_unix(sudo:session): session closed for user root
Jan 26 10:24:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:24:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:43.601Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:24:43 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:24:43 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:24:43 compute-0 sudo[290188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:24:43 compute-0 sudo[290188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:24:43 compute-0 sudo[290188]: pam_unix(sudo:session): session closed for user root
Jan 26 10:24:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:43.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:44 compute-0 ceph-mon[74456]: pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:24:44 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2617897557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:24:44 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:24:44 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:24:44 compute-0 nova_compute[254880]: 2026-01-26 10:24:44.869 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:44 compute-0 nova_compute[254880]: 2026-01-26 10:24:44.941 254884 DEBUG oslo_concurrency.processutils [None req-695a35b5-cbaf-43c3-a71b-bf8c928be5ef c2f0bcfebfa24487b4079cc85d8950ce 3ff3fa2a5531460b993c609589aa545d - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:24:44 compute-0 nova_compute[254880]: 2026-01-26 10:24:44.959 254884 DEBUG oslo_concurrency.processutils [None req-695a35b5-cbaf-43c3-a71b-bf8c928be5ef c2f0bcfebfa24487b4079cc85d8950ce 3ff3fa2a5531460b993c609589aa545d - - default default] CMD "env LANG=C uptime" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:24:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:24:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:24:45 compute-0 nova_compute[254880]: 2026-01-26 10:24:45.496 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:24:45 compute-0 nova_compute[254880]: 2026-01-26 10:24:45.496 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:24:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:24:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:45.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:24:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:45.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:45 compute-0 nova_compute[254880]: 2026-01-26 10:24:45.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:24:45 compute-0 nova_compute[254880]: 2026-01-26 10:24:45.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:24:45 compute-0 nova_compute[254880]: 2026-01-26 10:24:45.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:24:45 compute-0 nova_compute[254880]: 2026-01-26 10:24:45.960 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:45 compute-0 nova_compute[254880]: 2026-01-26 10:24:45.984 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:24:46 compute-0 ceph-mon[74456]: pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:24:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:24:46] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:24:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:24:46] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:24:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:24:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:24:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:24:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:24:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:24:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:47.281Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:47.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:47 compute-0 sudo[290218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:24:47 compute-0 sudo[290218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:24:47 compute-0 sudo[290218]: pam_unix(sudo:session): session closed for user root
Jan 26 10:24:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:47.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:47 compute-0 nova_compute[254880]: 2026-01-26 10:24:47.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:24:47 compute-0 nova_compute[254880]: 2026-01-26 10:24:47.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:24:47 compute-0 nova_compute[254880]: 2026-01-26 10:24:47.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:24:48 compute-0 ceph-mon[74456]: pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:24:48 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1158727383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:24:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:24:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:24:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:24:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:24:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:24:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:24:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:24:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:24:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:48.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:24:49 compute-0 podman[290245]: 2026-01-26 10:24:49.153028336 +0000 UTC m=+0.082022148 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 10:24:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2989598300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:24:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:24:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:49.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:49 compute-0 nova_compute[254880]: 2026-01-26 10:24:49.872 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:24:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:49.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:24:49 compute-0 nova_compute[254880]: 2026-01-26 10:24:49.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:24:49 compute-0 nova_compute[254880]: 2026-01-26 10:24:49.958 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:24:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:24:50 compute-0 ceph-mon[74456]: pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:24:50 compute-0 nova_compute[254880]: 2026-01-26 10:24:50.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:24:50 compute-0 nova_compute[254880]: 2026-01-26 10:24:50.963 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:51 compute-0 ceph-mon[74456]: pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:51.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:24:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:51.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:24:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:24:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:24:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:24:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:24:52 compute-0 nova_compute[254880]: 2026-01-26 10:24:52.558 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:52 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:24:52.557 166625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '02:1d:e1', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:2d:b7:9f:32:de'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 10:24:52 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:24:52.559 166625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 10:24:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:53.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:53.602Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:24:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:53.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:24:54 compute-0 ceph-mon[74456]: pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3808329144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:24:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/11944970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:24:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:24:54.710 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:24:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:24:54.710 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:24:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:24:54.710 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:24:54 compute-0 nova_compute[254880]: 2026-01-26 10:24:54.875 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:24:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:24:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:55.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:24:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:55.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:55 compute-0 nova_compute[254880]: 2026-01-26 10:24:55.963 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:56 compute-0 ceph-mon[74456]: pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:24:56 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:24:56.561 166625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f90cdfa2-81a1-408b-861e-9121944637ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 10:24:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:24:56] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:24:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:24:56] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:24:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:24:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:24:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:24:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:24:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:24:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:57.283Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:57.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:24:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:57.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:24:58 compute-0 ceph-mon[74456]: pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/4159726104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:24:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/4159726104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:24:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:24:58.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:24:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:24:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:24:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:24:59.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:24:59 compute-0 nova_compute[254880]: 2026-01-26 10:24:59.878 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:24:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:24:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:24:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:24:59.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:25:00 compute-0 ceph-mon[74456]: pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:00 compute-0 nova_compute[254880]: 2026-01-26 10:25:00.965 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:01.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:01.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:25:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:25:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:25:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:25:02 compute-0 ceph-mon[74456]: pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:03.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:03.603Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:25:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:25:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:03.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:04 compute-0 ceph-mon[74456]: pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:25:04 compute-0 nova_compute[254880]: 2026-01-26 10:25:04.882 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:25:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:05 compute-0 podman[290289]: 2026-01-26 10:25:05.162823482 +0000 UTC m=+0.077461196 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 10:25:05 compute-0 ceph-mon[74456]: pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:05.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:05 compute-0 nova_compute[254880]: 2026-01-26 10:25:05.967 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:05.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:25:06] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:25:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:25:06] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:25:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:25:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:25:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:25:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:25:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:07.284Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:25:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:07.284Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:25:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:07.284Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:25:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:07.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:07 compute-0 sudo[290311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:25:07 compute-0 sudo[290311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:25:07 compute-0 sudo[290311]: pam_unix(sudo:session): session closed for user root
Jan 26 10:25:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:07.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:08 compute-0 ceph-mon[74456]: pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:08.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:25:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:09.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:25:09 compute-0 nova_compute[254880]: 2026-01-26 10:25:09.885 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:09.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:25:10 compute-0 ceph-mon[74456]: pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:10 compute-0 nova_compute[254880]: 2026-01-26 10:25:10.968 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:11.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:11.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:25:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:25:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:25:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:25:12 compute-0 ceph-mon[74456]: pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:13.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:13.603Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:13.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:14 compute-0 ceph-mon[74456]: pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:14 compute-0 nova_compute[254880]: 2026-01-26 10:25:14.888 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:25:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:25:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:15.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:25:15 compute-0 nova_compute[254880]: 2026-01-26 10:25:15.970 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:15.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:16 compute-0 ceph-mon[74456]: pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:25:16] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:25:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:25:16] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:25:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:25:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:25:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:25:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:25:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:17.285Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:17.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:25:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:17.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:25:18 compute-0 ceph-mon[74456]: pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:25:18
Jan 26 10:25:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:25:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:25:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'backups', '.nfs', 'images', '.mgr', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'volumes', 'cephfs.cephfs.data']
Jan 26 10:25:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:25:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:25:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:25:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:25:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:25:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:25:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:25:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:25:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:25:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:18.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:25:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:25:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:19.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:19 compute-0 nova_compute[254880]: 2026-01-26 10:25:19.891 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:19.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:25:20 compute-0 podman[290348]: 2026-01-26 10:25:20.144933584 +0000 UTC m=+0.079900590 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 10:25:20 compute-0 ceph-mon[74456]: pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:20 compute-0 nova_compute[254880]: 2026-01-26 10:25:20.973 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:21 compute-0 ceph-mon[74456]: pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:21.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:25:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:21.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:25:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:25:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:25:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:25:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:25:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:23.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:23.604Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:25:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:23.604Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:25:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:23.605Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:25:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:23.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:24 compute-0 ceph-mon[74456]: pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:24 compute-0 nova_compute[254880]: 2026-01-26 10:25:24.895 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:25:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:25.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:25 compute-0 nova_compute[254880]: 2026-01-26 10:25:25.975 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:26.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:26 compute-0 ceph-mon[74456]: pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:25:26] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:25:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:25:26] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:25:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:25:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:25:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:25:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:25:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:27.286Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:27.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:27 compute-0 sudo[290382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:25:27 compute-0 sudo[290382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:25:27 compute-0 sudo[290382]: pam_unix(sudo:session): session closed for user root
Jan 26 10:25:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:28.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:28 compute-0 ceph-mon[74456]: pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:28.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:29.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:29 compute-0 nova_compute[254880]: 2026-01-26 10:25:29.899 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:30.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:25:30 compute-0 ceph-mon[74456]: pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:30 compute-0 nova_compute[254880]: 2026-01-26 10:25:30.976 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:31 compute-0 ceph-mon[74456]: pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:31.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:25:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:25:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:25:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:25:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:32.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:33.606Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:25:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:33.606Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:33.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:25:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:25:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:34.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:34 compute-0 ceph-mon[74456]: pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:25:34 compute-0 nova_compute[254880]: 2026-01-26 10:25:34.903 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:25:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:35.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:35 compute-0 nova_compute[254880]: 2026-01-26 10:25:35.978 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:36 compute-0 podman[290415]: 2026-01-26 10:25:36.138754125 +0000 UTC m=+0.063847636 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 10:25:36 compute-0 ceph-mon[74456]: pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:25:36] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:25:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:25:36] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:25:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:25:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:25:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:25:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:25:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:37.287Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:37.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:38.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:38 compute-0 ceph-mon[74456]: pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:38.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:39.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:39 compute-0 nova_compute[254880]: 2026-01-26 10:25:39.906 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:40.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:25:40 compute-0 ceph-mon[74456]: pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:40 compute-0 nova_compute[254880]: 2026-01-26 10:25:40.981 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:41.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:25:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:25:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:25:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:25:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:42.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:42 compute-0 ceph-mon[74456]: pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:43.607Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:43.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:43 compute-0 sudo[290443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:25:43 compute-0 sudo[290443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:25:43 compute-0 nova_compute[254880]: 2026-01-26 10:25:43.957 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:25:43 compute-0 nova_compute[254880]: 2026-01-26 10:25:43.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:25:43 compute-0 sudo[290443]: pam_unix(sudo:session): session closed for user root
Jan 26 10:25:43 compute-0 nova_compute[254880]: 2026-01-26 10:25:43.980 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:25:43 compute-0 nova_compute[254880]: 2026-01-26 10:25:43.980 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:25:43 compute-0 nova_compute[254880]: 2026-01-26 10:25:43.980 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:25:43 compute-0 nova_compute[254880]: 2026-01-26 10:25:43.980 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:25:43 compute-0 nova_compute[254880]: 2026-01-26 10:25:43.981 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:25:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:44.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:44 compute-0 sudo[290468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:25:44 compute-0 sudo[290468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:25:44 compute-0 ceph-mon[74456]: pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:25:44 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4072745689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:25:44 compute-0 nova_compute[254880]: 2026-01-26 10:25:44.475 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:25:44 compute-0 sudo[290468]: pam_unix(sudo:session): session closed for user root
Jan 26 10:25:44 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 10:25:44 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 10:25:44 compute-0 nova_compute[254880]: 2026-01-26 10:25:44.645 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:25:44 compute-0 nova_compute[254880]: 2026-01-26 10:25:44.646 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4480MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:25:44 compute-0 nova_compute[254880]: 2026-01-26 10:25:44.646 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:25:44 compute-0 nova_compute[254880]: 2026-01-26 10:25:44.647 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:25:44 compute-0 nova_compute[254880]: 2026-01-26 10:25:44.710 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:25:44 compute-0 nova_compute[254880]: 2026-01-26 10:25:44.711 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:25:44 compute-0 nova_compute[254880]: 2026-01-26 10:25:44.729 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:25:44 compute-0 nova_compute[254880]: 2026-01-26 10:25:44.909 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:25:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:25:45 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/490763872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:25:45 compute-0 nova_compute[254880]: 2026-01-26 10:25:45.205 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:25:45 compute-0 nova_compute[254880]: 2026-01-26 10:25:45.212 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:25:45 compute-0 nova_compute[254880]: 2026-01-26 10:25:45.226 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:25:45 compute-0 nova_compute[254880]: 2026-01-26 10:25:45.228 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:25:45 compute-0 nova_compute[254880]: 2026-01-26 10:25:45.228 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:25:45 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4072745689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:25:45 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 10:25:45 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/490763872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:25:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:45.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:45 compute-0 nova_compute[254880]: 2026-01-26 10:25:45.982 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:46.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:46 compute-0 nova_compute[254880]: 2026-01-26 10:25:46.230 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:25:46 compute-0 ceph-mon[74456]: pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:25:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 26 10:25:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 26 10:25:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:25:46] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:25:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:25:46] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:25:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 26 10:25:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 26 10:25:46 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:46 compute-0 nova_compute[254880]: 2026-01-26 10:25:46.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:25:46 compute-0 nova_compute[254880]: 2026-01-26 10:25:46.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:25:46 compute-0 nova_compute[254880]: 2026-01-26 10:25:46.960 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:25:46 compute-0 nova_compute[254880]: 2026-01-26 10:25:46.983 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:25:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:25:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:25:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:25:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:25:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:47.288Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 26 10:25:47 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 10:25:47 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:47 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:47 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:47 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:47 compute-0 ceph-mon[74456]: pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:47 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 10:25:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:47.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 26 10:25:47 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 10:25:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:25:47 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:25:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:25:47 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:25:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:25:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:25:47 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:25:47 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:25:47 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:25:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:25:47 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:25:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:25:47 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:25:47 compute-0 sudo[290572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:25:47 compute-0 sudo[290572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:25:47 compute-0 sudo[290572]: pam_unix(sudo:session): session closed for user root
Jan 26 10:25:47 compute-0 sudo[290595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:25:47 compute-0 sudo[290595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:25:47 compute-0 sudo[290595]: pam_unix(sudo:session): session closed for user root
Jan 26 10:25:47 compute-0 sudo[290614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:25:47 compute-0 sudo[290614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:25:47 compute-0 nova_compute[254880]: 2026-01-26 10:25:47.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:25:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:48.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:48 compute-0 podman[290689]: 2026-01-26 10:25:48.240822328 +0000 UTC m=+0.038295692 container create b127ace015074cba8b3ad1f33bd4f1dc3e256323d1d396b3d24a733642623f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:25:48 compute-0 systemd[1]: Started libpod-conmon-b127ace015074cba8b3ad1f33bd4f1dc3e256323d1d396b3d24a733642623f15.scope.
Jan 26 10:25:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:25:48 compute-0 podman[290689]: 2026-01-26 10:25:48.223472869 +0000 UTC m=+0.020946253 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:25:48 compute-0 podman[290689]: 2026-01-26 10:25:48.327918356 +0000 UTC m=+0.125391740 container init b127ace015074cba8b3ad1f33bd4f1dc3e256323d1d396b3d24a733642623f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 10:25:48 compute-0 podman[290689]: 2026-01-26 10:25:48.335361402 +0000 UTC m=+0.132834766 container start b127ace015074cba8b3ad1f33bd4f1dc3e256323d1d396b3d24a733642623f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_dubinsky, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 26 10:25:48 compute-0 podman[290689]: 2026-01-26 10:25:48.338239628 +0000 UTC m=+0.135713012 container attach b127ace015074cba8b3ad1f33bd4f1dc3e256323d1d396b3d24a733642623f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_dubinsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 10:25:48 compute-0 lucid_dubinsky[290705]: 167 167
Jan 26 10:25:48 compute-0 systemd[1]: libpod-b127ace015074cba8b3ad1f33bd4f1dc3e256323d1d396b3d24a733642623f15.scope: Deactivated successfully.
Jan 26 10:25:48 compute-0 conmon[290705]: conmon b127ace015074cba8b3a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b127ace015074cba8b3ad1f33bd4f1dc3e256323d1d396b3d24a733642623f15.scope/container/memory.events
Jan 26 10:25:48 compute-0 podman[290689]: 2026-01-26 10:25:48.343258631 +0000 UTC m=+0.140732005 container died b127ace015074cba8b3ad1f33bd4f1dc3e256323d1d396b3d24a733642623f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 10:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5e2062d0193050c68951fcca3e70321fbbcc5c244ad087a26815fe0e6e30c9a-merged.mount: Deactivated successfully.
Jan 26 10:25:48 compute-0 podman[290689]: 2026-01-26 10:25:48.381150161 +0000 UTC m=+0.178623525 container remove b127ace015074cba8b3ad1f33bd4f1dc3e256323d1d396b3d24a733642623f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:25:48 compute-0 systemd[1]: libpod-conmon-b127ace015074cba8b3ad1f33bd4f1dc3e256323d1d396b3d24a733642623f15.scope: Deactivated successfully.
Jan 26 10:25:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 10:25:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:25:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:25:48 compute-0 ceph-mon[74456]: pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:25:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:25:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:25:48 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:25:48 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1077740556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:25:48 compute-0 podman[290730]: 2026-01-26 10:25:48.536546712 +0000 UTC m=+0.041877436 container create a8eff54f887da63129d11fb6da98a7b03c8fa2e8e60daa93a82afec2617e5682 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:25:48 compute-0 systemd[1]: Started libpod-conmon-a8eff54f887da63129d11fb6da98a7b03c8fa2e8e60daa93a82afec2617e5682.scope.
Jan 26 10:25:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa7e5ce6a3c70d1705cee3c7b11fc7926ab4d51b4e374f9268a498b77e910dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa7e5ce6a3c70d1705cee3c7b11fc7926ab4d51b4e374f9268a498b77e910dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa7e5ce6a3c70d1705cee3c7b11fc7926ab4d51b4e374f9268a498b77e910dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa7e5ce6a3c70d1705cee3c7b11fc7926ab4d51b4e374f9268a498b77e910dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa7e5ce6a3c70d1705cee3c7b11fc7926ab4d51b4e374f9268a498b77e910dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:25:48 compute-0 podman[290730]: 2026-01-26 10:25:48.519483782 +0000 UTC m=+0.024814526 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:25:48 compute-0 podman[290730]: 2026-01-26 10:25:48.616825941 +0000 UTC m=+0.122156685 container init a8eff54f887da63129d11fb6da98a7b03c8fa2e8e60daa93a82afec2617e5682 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 26 10:25:48 compute-0 podman[290730]: 2026-01-26 10:25:48.627112732 +0000 UTC m=+0.132443456 container start a8eff54f887da63129d11fb6da98a7b03c8fa2e8e60daa93a82afec2617e5682 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:25:48 compute-0 podman[290730]: 2026-01-26 10:25:48.630178624 +0000 UTC m=+0.135509348 container attach a8eff54f887da63129d11fb6da98a7b03c8fa2e8e60daa93a82afec2617e5682 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:25:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:25:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:25:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:25:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:25:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:25:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:25:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:25:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:25:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:48.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:48 compute-0 blissful_swartz[290746]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:25:48 compute-0 blissful_swartz[290746]: --> All data devices are unavailable
Jan 26 10:25:48 compute-0 nova_compute[254880]: 2026-01-26 10:25:48.954 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:25:48 compute-0 nova_compute[254880]: 2026-01-26 10:25:48.977 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:25:48 compute-0 systemd[1]: libpod-a8eff54f887da63129d11fb6da98a7b03c8fa2e8e60daa93a82afec2617e5682.scope: Deactivated successfully.
Jan 26 10:25:48 compute-0 podman[290730]: 2026-01-26 10:25:48.98845431 +0000 UTC m=+0.493785064 container died a8eff54f887da63129d11fb6da98a7b03c8fa2e8e60daa93a82afec2617e5682 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfa7e5ce6a3c70d1705cee3c7b11fc7926ab4d51b4e374f9268a498b77e910dc-merged.mount: Deactivated successfully.
Jan 26 10:25:49 compute-0 podman[290730]: 2026-01-26 10:25:49.033473208 +0000 UTC m=+0.538803932 container remove a8eff54f887da63129d11fb6da98a7b03c8fa2e8e60daa93a82afec2617e5682 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_swartz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:25:49 compute-0 systemd[1]: libpod-conmon-a8eff54f887da63129d11fb6da98a7b03c8fa2e8e60daa93a82afec2617e5682.scope: Deactivated successfully.
Jan 26 10:25:49 compute-0 sudo[290614]: pam_unix(sudo:session): session closed for user root
Jan 26 10:25:49 compute-0 sudo[290776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:25:49 compute-0 sudo[290776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:25:49 compute-0 sudo[290776]: pam_unix(sudo:session): session closed for user root
Jan 26 10:25:49 compute-0 sudo[290801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:25:49 compute-0 sudo[290801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:25:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:25:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3929538449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:25:49 compute-0 podman[290866]: 2026-01-26 10:25:49.624945537 +0000 UTC m=+0.039973695 container create d4832f9d79c520046d730bdfd92620db1e9bf4798c13144f4cc5cf2d73e60098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 26 10:25:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:49.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:49 compute-0 systemd[1]: Started libpod-conmon-d4832f9d79c520046d730bdfd92620db1e9bf4798c13144f4cc5cf2d73e60098.scope.
Jan 26 10:25:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:25:49 compute-0 podman[290866]: 2026-01-26 10:25:49.687282183 +0000 UTC m=+0.102310371 container init d4832f9d79c520046d730bdfd92620db1e9bf4798c13144f4cc5cf2d73e60098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:25:49 compute-0 podman[290866]: 2026-01-26 10:25:49.693760483 +0000 UTC m=+0.108788641 container start d4832f9d79c520046d730bdfd92620db1e9bf4798c13144f4cc5cf2d73e60098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 10:25:49 compute-0 sharp_cerf[290882]: 167 167
Jan 26 10:25:49 compute-0 podman[290866]: 2026-01-26 10:25:49.698031666 +0000 UTC m=+0.113059824 container attach d4832f9d79c520046d730bdfd92620db1e9bf4798c13144f4cc5cf2d73e60098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Jan 26 10:25:49 compute-0 systemd[1]: libpod-d4832f9d79c520046d730bdfd92620db1e9bf4798c13144f4cc5cf2d73e60098.scope: Deactivated successfully.
Jan 26 10:25:49 compute-0 podman[290866]: 2026-01-26 10:25:49.698703754 +0000 UTC m=+0.113731922 container died d4832f9d79c520046d730bdfd92620db1e9bf4798c13144f4cc5cf2d73e60098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_cerf, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:25:49 compute-0 podman[290866]: 2026-01-26 10:25:49.607553069 +0000 UTC m=+0.022581247 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:25:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-310fb16540563a1623a73e087a124d14f5a3ff2ae05476857fe6496c63a7eb19-merged.mount: Deactivated successfully.
Jan 26 10:25:49 compute-0 podman[290866]: 2026-01-26 10:25:49.731382246 +0000 UTC m=+0.146410405 container remove d4832f9d79c520046d730bdfd92620db1e9bf4798c13144f4cc5cf2d73e60098 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_cerf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:25:49 compute-0 systemd[1]: libpod-conmon-d4832f9d79c520046d730bdfd92620db1e9bf4798c13144f4cc5cf2d73e60098.scope: Deactivated successfully.
Jan 26 10:25:49 compute-0 podman[290904]: 2026-01-26 10:25:49.891172364 +0000 UTC m=+0.044861735 container create 99a0b980c3a2daf8a3ef3221911e063c668938344ed4f159a19125480eb7e228 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 26 10:25:49 compute-0 nova_compute[254880]: 2026-01-26 10:25:49.933 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:49 compute-0 systemd[1]: Started libpod-conmon-99a0b980c3a2daf8a3ef3221911e063c668938344ed4f159a19125480eb7e228.scope.
Jan 26 10:25:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:25:49 compute-0 podman[290904]: 2026-01-26 10:25:49.866833052 +0000 UTC m=+0.020522463 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4430f5e1daf0a506122cf1cd2ad0a7152cdfa7f00c46f66b564400946843f845/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4430f5e1daf0a506122cf1cd2ad0a7152cdfa7f00c46f66b564400946843f845/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4430f5e1daf0a506122cf1cd2ad0a7152cdfa7f00c46f66b564400946843f845/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4430f5e1daf0a506122cf1cd2ad0a7152cdfa7f00c46f66b564400946843f845/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:25:49 compute-0 podman[290904]: 2026-01-26 10:25:49.976066505 +0000 UTC m=+0.129755896 container init 99a0b980c3a2daf8a3ef3221911e063c668938344ed4f159a19125480eb7e228 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 10:25:49 compute-0 nova_compute[254880]: 2026-01-26 10:25:49.976 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:25:49 compute-0 podman[290904]: 2026-01-26 10:25:49.983526501 +0000 UTC m=+0.137215872 container start 99a0b980c3a2daf8a3ef3221911e063c668938344ed4f159a19125480eb7e228 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goldstine, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:25:49 compute-0 podman[290904]: 2026-01-26 10:25:49.986605422 +0000 UTC m=+0.140294793 container attach 99a0b980c3a2daf8a3ef3221911e063c668938344ed4f159a19125480eb7e228 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goldstine, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 26 10:25:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:50.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]: {
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:     "0": [
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:         {
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:             "devices": [
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "/dev/loop3"
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:             ],
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:             "lv_name": "ceph_lv0",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:             "lv_size": "21470642176",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:             "name": "ceph_lv0",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:             "tags": {
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "ceph.cluster_name": "ceph",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "ceph.crush_device_class": "",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "ceph.encrypted": "0",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "ceph.osd_id": "0",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "ceph.type": "block",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "ceph.vdo": "0",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:                 "ceph.with_tpm": "0"
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:             },
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:             "type": "block",
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:             "vg_name": "ceph_vg0"
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:         }
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]:     ]
Jan 26 10:25:50 compute-0 hardcore_goldstine[290920]: }
Jan 26 10:25:50 compute-0 podman[290904]: 2026-01-26 10:25:50.275167008 +0000 UTC m=+0.428856379 container died 99a0b980c3a2daf8a3ef3221911e063c668938344ed4f159a19125480eb7e228 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:25:50 compute-0 systemd[1]: libpod-99a0b980c3a2daf8a3ef3221911e063c668938344ed4f159a19125480eb7e228.scope: Deactivated successfully.
Jan 26 10:25:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4430f5e1daf0a506122cf1cd2ad0a7152cdfa7f00c46f66b564400946843f845-merged.mount: Deactivated successfully.
Jan 26 10:25:50 compute-0 podman[290904]: 2026-01-26 10:25:50.320487164 +0000 UTC m=+0.474176535 container remove 99a0b980c3a2daf8a3ef3221911e063c668938344ed4f159a19125480eb7e228 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_goldstine, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:25:50 compute-0 systemd[1]: libpod-conmon-99a0b980c3a2daf8a3ef3221911e063c668938344ed4f159a19125480eb7e228.scope: Deactivated successfully.
Jan 26 10:25:50 compute-0 sudo[290801]: pam_unix(sudo:session): session closed for user root
Jan 26 10:25:50 compute-0 podman[290930]: 2026-01-26 10:25:50.401219286 +0000 UTC m=+0.089784892 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 26 10:25:50 compute-0 sudo[290964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:25:50 compute-0 sudo[290964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:25:50 compute-0 sudo[290964]: pam_unix(sudo:session): session closed for user root
Jan 26 10:25:50 compute-0 sudo[290994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:25:50 compute-0 sudo[290994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:25:50 compute-0 ceph-mon[74456]: pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:25:50 compute-0 podman[291062]: 2026-01-26 10:25:50.858120304 +0000 UTC m=+0.052488287 container create abfa40a518f95c9e84bcf4aef8d48292e1c56e903ac85dd7bd16cfb862a271ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_torvalds, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:25:50 compute-0 systemd[1]: Started libpod-conmon-abfa40a518f95c9e84bcf4aef8d48292e1c56e903ac85dd7bd16cfb862a271ef.scope.
Jan 26 10:25:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:25:50 compute-0 podman[291062]: 2026-01-26 10:25:50.836993116 +0000 UTC m=+0.031361149 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:25:50 compute-0 podman[291062]: 2026-01-26 10:25:50.93376493 +0000 UTC m=+0.128132943 container init abfa40a518f95c9e84bcf4aef8d48292e1c56e903ac85dd7bd16cfb862a271ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_torvalds, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 10:25:50 compute-0 podman[291062]: 2026-01-26 10:25:50.940780835 +0000 UTC m=+0.135148818 container start abfa40a518f95c9e84bcf4aef8d48292e1c56e903ac85dd7bd16cfb862a271ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_torvalds, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:25:50 compute-0 podman[291062]: 2026-01-26 10:25:50.943945469 +0000 UTC m=+0.138313452 container attach abfa40a518f95c9e84bcf4aef8d48292e1c56e903ac85dd7bd16cfb862a271ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 26 10:25:50 compute-0 busy_torvalds[291079]: 167 167
Jan 26 10:25:50 compute-0 systemd[1]: libpod-abfa40a518f95c9e84bcf4aef8d48292e1c56e903ac85dd7bd16cfb862a271ef.scope: Deactivated successfully.
Jan 26 10:25:50 compute-0 conmon[291079]: conmon abfa40a518f95c9e84bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abfa40a518f95c9e84bcf4aef8d48292e1c56e903ac85dd7bd16cfb862a271ef.scope/container/memory.events
Jan 26 10:25:50 compute-0 podman[291062]: 2026-01-26 10:25:50.947287008 +0000 UTC m=+0.141654991 container died abfa40a518f95c9e84bcf4aef8d48292e1c56e903ac85dd7bd16cfb862a271ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_torvalds, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 10:25:50 compute-0 nova_compute[254880]: 2026-01-26 10:25:50.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:25:50 compute-0 nova_compute[254880]: 2026-01-26 10:25:50.960 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:25:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b06a10b1cdb4e4f010395b7f0267cddc65d948f88d4b7971039122a8fc55d24-merged.mount: Deactivated successfully.
Jan 26 10:25:50 compute-0 podman[291062]: 2026-01-26 10:25:50.983860523 +0000 UTC m=+0.178228506 container remove abfa40a518f95c9e84bcf4aef8d48292e1c56e903ac85dd7bd16cfb862a271ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 10:25:51 compute-0 nova_compute[254880]: 2026-01-26 10:25:51.026 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:51 compute-0 systemd[1]: libpod-conmon-abfa40a518f95c9e84bcf4aef8d48292e1c56e903ac85dd7bd16cfb862a271ef.scope: Deactivated successfully.
Jan 26 10:25:51 compute-0 podman[291101]: 2026-01-26 10:25:51.179455414 +0000 UTC m=+0.042300267 container create 6140f78bc09700863a8e8ede122c8b6ab5d330b9e3842e35415539b2e3932c59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 10:25:51 compute-0 systemd[1]: Started libpod-conmon-6140f78bc09700863a8e8ede122c8b6ab5d330b9e3842e35415539b2e3932c59.scope.
Jan 26 10:25:51 compute-0 podman[291101]: 2026-01-26 10:25:51.162381534 +0000 UTC m=+0.025226397 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:25:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3aff917d4f0fd86c4b464bc089885a9c447973888e9a0ffdc894ba3f828a11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3aff917d4f0fd86c4b464bc089885a9c447973888e9a0ffdc894ba3f828a11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3aff917d4f0fd86c4b464bc089885a9c447973888e9a0ffdc894ba3f828a11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3aff917d4f0fd86c4b464bc089885a9c447973888e9a0ffdc894ba3f828a11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:25:51 compute-0 podman[291101]: 2026-01-26 10:25:51.283994524 +0000 UTC m=+0.146839417 container init 6140f78bc09700863a8e8ede122c8b6ab5d330b9e3842e35415539b2e3932c59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 10:25:51 compute-0 podman[291101]: 2026-01-26 10:25:51.294702346 +0000 UTC m=+0.157547189 container start 6140f78bc09700863a8e8ede122c8b6ab5d330b9e3842e35415539b2e3932c59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_carver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 10:25:51 compute-0 podman[291101]: 2026-01-26 10:25:51.29789369 +0000 UTC m=+0.160738553 container attach 6140f78bc09700863a8e8ede122c8b6ab5d330b9e3842e35415539b2e3932c59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_carver, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 10:25:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:51.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:25:51 compute-0 lvm[291192]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:25:51 compute-0 lvm[291192]: VG ceph_vg0 finished
Jan 26 10:25:51 compute-0 optimistic_carver[291117]: {}
Jan 26 10:25:51 compute-0 systemd[1]: libpod-6140f78bc09700863a8e8ede122c8b6ab5d330b9e3842e35415539b2e3932c59.scope: Deactivated successfully.
Jan 26 10:25:51 compute-0 systemd[1]: libpod-6140f78bc09700863a8e8ede122c8b6ab5d330b9e3842e35415539b2e3932c59.scope: Consumed 1.137s CPU time.
Jan 26 10:25:51 compute-0 podman[291101]: 2026-01-26 10:25:51.98822158 +0000 UTC m=+0.851066423 container died 6140f78bc09700863a8e8ede122c8b6ab5d330b9e3842e35415539b2e3932c59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:25:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:25:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:25:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:25:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:25:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e3aff917d4f0fd86c4b464bc089885a9c447973888e9a0ffdc894ba3f828a11-merged.mount: Deactivated successfully.
Jan 26 10:25:52 compute-0 podman[291101]: 2026-01-26 10:25:52.027872416 +0000 UTC m=+0.890717259 container remove 6140f78bc09700863a8e8ede122c8b6ab5d330b9e3842e35415539b2e3932c59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Jan 26 10:25:52 compute-0 systemd[1]: libpod-conmon-6140f78bc09700863a8e8ede122c8b6ab5d330b9e3842e35415539b2e3932c59.scope: Deactivated successfully.
Jan 26 10:25:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:52.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:52 compute-0 sudo[290994]: pam_unix(sudo:session): session closed for user root
Jan 26 10:25:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:25:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:25:52 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:52 compute-0 sudo[291208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:25:52 compute-0 sudo[291208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:25:52 compute-0 sudo[291208]: pam_unix(sudo:session): session closed for user root
Jan 26 10:25:52 compute-0 nova_compute[254880]: 2026-01-26 10:25:52.960 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:25:53 compute-0 ceph-mon[74456]: pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:25:53 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:53 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:25:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:53.608Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:53.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:25:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:54.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:25:54.712 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:25:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:25:54.712 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:25:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:25:54.712 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:25:54 compute-0 nova_compute[254880]: 2026-01-26 10:25:54.938 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:25:55 compute-0 ceph-mon[74456]: pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:25:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:55.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:25:56 compute-0 nova_compute[254880]: 2026-01-26 10:25:56.027 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:25:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:56.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:56 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2985707705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:25:56 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2820156879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:25:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:25:56] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:25:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:25:56] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:25:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:25:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:25:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:25:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:25:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:25:57 compute-0 ceph-mon[74456]: pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:25:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:57.289Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:57.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:25:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:25:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:25:58.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:25:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1247078311' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:25:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/1247078311' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:25:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:25:58.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:25:59 compute-0 ceph-mon[74456]: pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 26 10:25:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:25:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:25:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:25:59.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:25:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:25:59 compute-0 nova_compute[254880]: 2026-01-26 10:25:59.941 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:00.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:26:01 compute-0 nova_compute[254880]: 2026-01-26 10:26:01.028 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:01 compute-0 ceph-mon[74456]: pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:26:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:01.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:26:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:26:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:26:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:26:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:26:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:26:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:02.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:26:03 compute-0 ceph-mon[74456]: pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:03.609Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:26:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:03.609Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:26:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:03.609Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:26:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:26:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:26:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:04.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:26:04 compute-0 nova_compute[254880]: 2026-01-26 10:26:04.945 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:26:05 compute-0 ceph-mon[74456]: pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:26:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:05.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:26:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:06 compute-0 nova_compute[254880]: 2026-01-26 10:26:06.030 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:06.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:26:06] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:26:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:26:06] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:26:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:26:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:26:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:26:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:26:07 compute-0 podman[291249]: 2026-01-26 10:26:07.128096076 +0000 UTC m=+0.060286012 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 10:26:07 compute-0 ceph-mon[74456]: pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:07.290Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:07.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:07 compute-0 sudo[291269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:26:07 compute-0 sudo[291269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:26:07 compute-0 sudo[291269]: pam_unix(sudo:session): session closed for user root
Jan 26 10:26:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:08.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:08 compute-0 ceph-mon[74456]: pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:08.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:26:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:09.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:26:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:09 compute-0 nova_compute[254880]: 2026-01-26 10:26:09.949 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:10.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:26:10 compute-0 ceph-mon[74456]: pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:11 compute-0 nova_compute[254880]: 2026-01-26 10:26:11.032 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:11.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:26:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:26:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:26:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:26:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:12.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:12 compute-0 ceph-mon[74456]: pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:13.610Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:13.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:14.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:14 compute-0 ceph-mon[74456]: pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:14 compute-0 nova_compute[254880]: 2026-01-26 10:26:14.951 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:26:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:15.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:16 compute-0 nova_compute[254880]: 2026-01-26 10:26:16.035 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:26:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:16.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:26:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:26:16] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:26:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:26:16] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:26:16 compute-0 ceph-mon[74456]: pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:26:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:26:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:26:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:26:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:17.290Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:26:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:17.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:26:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:18.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:26:18
Jan 26 10:26:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:26:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:26:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.nfs', 'vms', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.log']
Jan 26 10:26:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:26:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:26:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:26:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:26:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:26:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:26:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:26:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:26:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:26:18 compute-0 ceph-mon[74456]: pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:26:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:18.922Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:26:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:26:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:19.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:26:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:19 compute-0 nova_compute[254880]: 2026-01-26 10:26:19.975 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:26:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:20.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:20 compute-0 ceph-mon[74456]: pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:21 compute-0 nova_compute[254880]: 2026-01-26 10:26:21.068 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:21 compute-0 podman[291308]: 2026-01-26 10:26:21.178821166 +0000 UTC m=+0.082394465 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:26:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:21.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:26:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:26:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:26:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:26:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:26:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:22.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:26:22 compute-0 ceph-mon[74456]: pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:23.610Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:23.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:24.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:24 compute-0 ceph-mon[74456]: pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:24 compute-0 nova_compute[254880]: 2026-01-26 10:26:24.978 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:26:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:25.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:26 compute-0 nova_compute[254880]: 2026-01-26 10:26:26.070 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:26:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:26.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:26:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:26:26] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:26:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:26:26] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:26:26 compute-0 ceph-mon[74456]: pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:26:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:26:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:26:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:26:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:27.291Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:27.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:27 compute-0 sudo[291341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:26:28 compute-0 sudo[291341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:26:28 compute-0 sudo[291341]: pam_unix(sudo:session): session closed for user root
Jan 26 10:26:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:28.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:28.923Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:26:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:28.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:28 compute-0 ceph-mon[74456]: pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:29.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:30 compute-0 nova_compute[254880]: 2026-01-26 10:26:30.028 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:26:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:30.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:30 compute-0 ceph-mon[74456]: pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:31 compute-0 nova_compute[254880]: 2026-01-26 10:26:31.095 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:26:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:31.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:26:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:26:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:26:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:26:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:26:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:26:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:32.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:26:32 compute-0 ceph-mon[74456]: pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:33.611Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:26:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:33.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:26:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:26:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:26:33 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:26:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:34.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:34 compute-0 ceph-mon[74456]: pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:35 compute-0 nova_compute[254880]: 2026-01-26 10:26:35.069 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:26:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:26:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:35.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:26:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:36.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:36 compute-0 nova_compute[254880]: 2026-01-26 10:26:36.125 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:26:36] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:26:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:26:36] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:26:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:26:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:26:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:26:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:26:37 compute-0 ceph-mon[74456]: pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:37.292Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:37.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:38 compute-0 podman[291376]: 2026-01-26 10:26:38.110424716 +0000 UTC m=+0.047334671 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 10:26:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:38.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:38.924Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:26:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:38.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:26:39 compute-0 ceph-mon[74456]: pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:39.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:40 compute-0 nova_compute[254880]: 2026-01-26 10:26:40.071 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:26:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:40.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:41 compute-0 ceph-mon[74456]: pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:41 compute-0 nova_compute[254880]: 2026-01-26 10:26:41.164 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:41.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:26:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:26:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:26:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:26:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:42.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:43 compute-0 ceph-mon[74456]: pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:43.612Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:43.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:44.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:44 compute-0 nova_compute[254880]: 2026-01-26 10:26:44.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:26:45 compute-0 ceph-mon[74456]: pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:45 compute-0 nova_compute[254880]: 2026-01-26 10:26:45.074 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:26:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:45.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:45 compute-0 nova_compute[254880]: 2026-01-26 10:26:45.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:26:45 compute-0 nova_compute[254880]: 2026-01-26 10:26:45.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.057 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.057 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.058 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.058 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.058 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:26:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:46.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.165 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:46 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:26:46 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1881061805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.542 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:26:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:26:46] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:26:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:26:46] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.709 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.711 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4512MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.711 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.711 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.797 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.797 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:26:46 compute-0 nova_compute[254880]: 2026-01-26 10:26:46.997 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:26:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:26:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:26:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:26:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:26:47 compute-0 ceph-mon[74456]: pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1881061805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:26:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:47.293Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:26:47 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1218141819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:26:47 compute-0 nova_compute[254880]: 2026-01-26 10:26:47.502 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:26:47 compute-0 nova_compute[254880]: 2026-01-26 10:26:47.507 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:26:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:47.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:48 compute-0 sudo[291450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:26:48 compute-0 sudo[291450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:26:48 compute-0 sudo[291450]: pam_unix(sudo:session): session closed for user root
Jan 26 10:26:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:26:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:48.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:26:48 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1218141819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:26:48 compute-0 nova_compute[254880]: 2026-01-26 10:26:48.693 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:26:48 compute-0 nova_compute[254880]: 2026-01-26 10:26:48.695 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:26:48 compute-0 nova_compute[254880]: 2026-01-26 10:26:48.695 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:26:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:26:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:26:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:26:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:26:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:26:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:26:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:26:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:26:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:48.925Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:49 compute-0 ceph-mon[74456]: pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:26:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3868117298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:26:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:49.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:49 compute-0 nova_compute[254880]: 2026-01-26 10:26:49.696 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:26:49 compute-0 nova_compute[254880]: 2026-01-26 10:26:49.696 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:26:49 compute-0 nova_compute[254880]: 2026-01-26 10:26:49.697 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:26:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:49 compute-0 nova_compute[254880]: 2026-01-26 10:26:49.736 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:26:49 compute-0 nova_compute[254880]: 2026-01-26 10:26:49.737 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:26:49 compute-0 nova_compute[254880]: 2026-01-26 10:26:49.957 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:26:49 compute-0 nova_compute[254880]: 2026-01-26 10:26:49.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:26:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:26:50 compute-0 nova_compute[254880]: 2026-01-26 10:26:50.105 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:50.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:50 compute-0 ceph-mon[74456]: pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:26:50 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3891417777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:26:50 compute-0 nova_compute[254880]: 2026-01-26 10:26:50.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:26:50 compute-0 nova_compute[254880]: 2026-01-26 10:26:50.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:26:51 compute-0 nova_compute[254880]: 2026-01-26 10:26:51.218 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:51.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:26:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:26:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:26:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:26:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:52.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:52 compute-0 podman[291479]: 2026-01-26 10:26:52.172128417 +0000 UTC m=+0.097189736 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Jan 26 10:26:52 compute-0 sudo[291505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:26:52 compute-0 sudo[291505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:26:52 compute-0 sudo[291505]: pam_unix(sudo:session): session closed for user root
Jan 26 10:26:52 compute-0 sudo[291530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:26:52 compute-0 sudo[291530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:26:52 compute-0 ceph-mon[74456]: pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:26:52 compute-0 sudo[291530]: pam_unix(sudo:session): session closed for user root
Jan 26 10:26:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:26:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:26:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:26:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:26:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 26 10:26:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:26:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:26:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:26:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:26:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:26:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:26:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:26:53 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:26:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:26:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:26:53 compute-0 sudo[291589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:26:53 compute-0 sudo[291589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:26:53 compute-0 sudo[291589]: pam_unix(sudo:session): session closed for user root
Jan 26 10:26:53 compute-0 sudo[291614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:26:53 compute-0 sudo[291614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:26:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:53.613Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:53.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:53 compute-0 podman[291681]: 2026-01-26 10:26:53.85111589 +0000 UTC m=+0.039871013 container create 350cf50e88ee48a2ccbf17078db8832bb35ed4687863907621439bdd7fa9b7f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 10:26:53 compute-0 systemd[1]: Started libpod-conmon-350cf50e88ee48a2ccbf17078db8832bb35ed4687863907621439bdd7fa9b7f8.scope.
Jan 26 10:26:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:26:53 compute-0 podman[291681]: 2026-01-26 10:26:53.832102869 +0000 UTC m=+0.020858002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:26:53 compute-0 podman[291681]: 2026-01-26 10:26:53.942347947 +0000 UTC m=+0.131103070 container init 350cf50e88ee48a2ccbf17078db8832bb35ed4687863907621439bdd7fa9b7f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_lehmann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:26:53 compute-0 podman[291681]: 2026-01-26 10:26:53.950719368 +0000 UTC m=+0.139474471 container start 350cf50e88ee48a2ccbf17078db8832bb35ed4687863907621439bdd7fa9b7f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:26:53 compute-0 podman[291681]: 2026-01-26 10:26:53.954020386 +0000 UTC m=+0.142775499 container attach 350cf50e88ee48a2ccbf17078db8832bb35ed4687863907621439bdd7fa9b7f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_lehmann, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:26:53 compute-0 magical_lehmann[291697]: 167 167
Jan 26 10:26:53 compute-0 systemd[1]: libpod-350cf50e88ee48a2ccbf17078db8832bb35ed4687863907621439bdd7fa9b7f8.scope: Deactivated successfully.
Jan 26 10:26:53 compute-0 podman[291681]: 2026-01-26 10:26:53.958929695 +0000 UTC m=+0.147684798 container died 350cf50e88ee48a2ccbf17078db8832bb35ed4687863907621439bdd7fa9b7f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_lehmann, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:26:53 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:26:53 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:26:53 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:26:53 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:26:53 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:26:53 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:26:53 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:26:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-10d9004ef1d0e27bc3fd01b32eb26592bbb5a9b4e10516a7b006202d4a8d4964-merged.mount: Deactivated successfully.
Jan 26 10:26:54 compute-0 podman[291681]: 2026-01-26 10:26:54.002598397 +0000 UTC m=+0.191353500 container remove 350cf50e88ee48a2ccbf17078db8832bb35ed4687863907621439bdd7fa9b7f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_lehmann, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:26:54 compute-0 systemd[1]: libpod-conmon-350cf50e88ee48a2ccbf17078db8832bb35ed4687863907621439bdd7fa9b7f8.scope: Deactivated successfully.
Jan 26 10:26:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 10:26:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:54.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 10:26:54 compute-0 podman[291724]: 2026-01-26 10:26:54.186575874 +0000 UTC m=+0.049595011 container create bbb17ad8980b6d8dfae64e5181c4d12e1ee1c3c6996a9fa88aafb2b8bb67c852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:26:54 compute-0 systemd[1]: Started libpod-conmon-bbb17ad8980b6d8dfae64e5181c4d12e1ee1c3c6996a9fa88aafb2b8bb67c852.scope.
Jan 26 10:26:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80380b4fceb19242401cc0d228c3eea7ba057255a9f99969ca5b911eb9819c01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80380b4fceb19242401cc0d228c3eea7ba057255a9f99969ca5b911eb9819c01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80380b4fceb19242401cc0d228c3eea7ba057255a9f99969ca5b911eb9819c01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80380b4fceb19242401cc0d228c3eea7ba057255a9f99969ca5b911eb9819c01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80380b4fceb19242401cc0d228c3eea7ba057255a9f99969ca5b911eb9819c01/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
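
The xfs warnings above are the kernel noting that these overlay bind mounts sit on an XFS filesystem with 32-bit inode timestamps, which run out at 0x7fffffff seconds after the Unix epoch. A minimal check of that cutoff (standard library only):

from datetime import datetime, timezone

# 0x7fffffff is the largest signed 32-bit epoch second, i.e. the
# classic "year 2038" limit the kernel message refers to.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00
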
Jan 26 10:26:54 compute-0 podman[291724]: 2026-01-26 10:26:54.164887381 +0000 UTC m=+0.027906568 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:26:54 compute-0 podman[291724]: 2026-01-26 10:26:54.283199724 +0000 UTC m=+0.146218891 container init bbb17ad8980b6d8dfae64e5181c4d12e1ee1c3c6996a9fa88aafb2b8bb67c852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_hugle, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 10:26:54 compute-0 podman[291724]: 2026-01-26 10:26:54.290052105 +0000 UTC m=+0.153071242 container start bbb17ad8980b6d8dfae64e5181c4d12e1ee1c3c6996a9fa88aafb2b8bb67c852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 26 10:26:54 compute-0 podman[291724]: 2026-01-26 10:26:54.309304452 +0000 UTC m=+0.172323599 container attach bbb17ad8980b6d8dfae64e5181c4d12e1ee1c3c6996a9fa88aafb2b8bb67c852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 26 10:26:54 compute-0 hopeful_hugle[291740]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:26:54 compute-0 hopeful_hugle[291740]: --> All data devices are unavailable
Jan 26 10:26:54 compute-0 systemd[1]: libpod-bbb17ad8980b6d8dfae64e5181c4d12e1ee1c3c6996a9fa88aafb2b8bb67c852.scope: Deactivated successfully.
Jan 26 10:26:54 compute-0 podman[291724]: 2026-01-26 10:26:54.634951257 +0000 UTC m=+0.497970394 container died bbb17ad8980b6d8dfae64e5181c4d12e1ee1c3c6996a9fa88aafb2b8bb67c852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_hugle, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:26:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-80380b4fceb19242401cc0d228c3eea7ba057255a9f99969ca5b911eb9819c01-merged.mount: Deactivated successfully.
Jan 26 10:26:54 compute-0 podman[291724]: 2026-01-26 10:26:54.674885902 +0000 UTC m=+0.537905039 container remove bbb17ad8980b6d8dfae64e5181c4d12e1ee1c3c6996a9fa88aafb2b8bb67c852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_hugle, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:26:54 compute-0 systemd[1]: libpod-conmon-bbb17ad8980b6d8dfae64e5181c4d12e1ee1c3c6996a9fa88aafb2b8bb67c852.scope: Deactivated successfully.
Jan 26 10:26:54 compute-0 sudo[291614]: pam_unix(sudo:session): session closed for user root
Jan 26 10:26:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:26:54.713 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:26:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:26:54.717 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:26:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:26:54.717 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:26:54 compute-0 sudo[291770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:26:54 compute-0 sudo[291770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:26:54 compute-0 sudo[291770]: pam_unix(sudo:session): session closed for user root
Jan 26 10:26:54 compute-0 sudo[291796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:26:54 compute-0 sudo[291796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:26:54 compute-0 nova_compute[254880]: 2026-01-26 10:26:54.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:26:54 compute-0 ceph-mon[74456]: pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 26 10:26:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.100600) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423215100681, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1434, "num_deletes": 257, "total_data_size": 2666359, "memory_usage": 2700024, "flush_reason": "Manual Compaction"}
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 26 10:26:55 compute-0 nova_compute[254880]: 2026-01-26 10:26:55.107 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423215114430, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2599346, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37027, "largest_seqno": 38460, "table_properties": {"data_size": 2592642, "index_size": 3839, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14234, "raw_average_key_size": 19, "raw_value_size": 2579060, "raw_average_value_size": 3612, "num_data_blocks": 165, "num_entries": 714, "num_filter_entries": 714, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769423082, "oldest_key_time": 1769423082, "file_creation_time": 1769423215, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 13897 microseconds, and 5494 cpu microseconds.
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.114493) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2599346 bytes OK
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.114516) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.116588) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.116599) EVENT_LOG_v1 {"time_micros": 1769423215116596, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.116616) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2660168, prev total WAL file size 2660168, number of live WAL files 2.
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.117492) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303031' seq:72057594037927935, type:22 .. '6C6F676D0031323534' seq:0, type:0; will stop at (end)
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2538KB)], [80(12MB)]
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423215117520, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 15676978, "oldest_snapshot_seqno": -1}
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6916 keys, 15512738 bytes, temperature: kUnknown
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423215187472, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 15512738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15466418, "index_size": 27904, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 181618, "raw_average_key_size": 26, "raw_value_size": 15341745, "raw_average_value_size": 2218, "num_data_blocks": 1103, "num_entries": 6916, "num_filter_entries": 6916, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769423215, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.187690) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 15512738 bytes
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.189490) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.9 rd, 221.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 12.5 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(12.0) write-amplify(6.0) OK, records in: 7448, records dropped: 532 output_compression: NoCompression
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.189504) EVENT_LOG_v1 {"time_micros": 1769423215189497, "job": 46, "event": "compaction_finished", "compaction_time_micros": 70020, "compaction_time_cpu_micros": 29043, "output_level": 6, "num_output_files": 1, "total_output_size": 15512738, "num_input_records": 7448, "num_output_records": 6916, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423215190048, "job": 46, "event": "table_file_deletion", "file_number": 82}
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423215192416, "job": 46, "event": "table_file_deletion", "file_number": 80}
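
The compaction summary a few lines up reports write-amplify(6.0) and read-write-amplify(12.0); both factors fall straight out of the byte counts logged for job 46. A minimal sketch redoing the arithmetic with the exact sizes from the EVENT_LOG entries above:

# Sizes taken verbatim from the job-46 log entries.
l0_in    = 2_599_346    # table #82, the freshly flushed L0 input
total_in = 15_676_978   # "input_data_size": L0 input plus L6 input (table #80)
out      = 15_512_738   # table #83, the compacted L6 output

write_amp = out / l0_in               # ~5.97 -> logged as write-amplify(6.0)
rw_amp    = (total_in + out) / l0_in  # ~12.0 -> logged as read-write-amplify(12.0)
print(f"{write_amp:.1f} {rw_amp:.1f}")
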
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.117396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.192491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.192495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.192497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.192498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:26:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:26:55.192500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:26:55 compute-0 podman[291862]: 2026-01-26 10:26:55.202580058 +0000 UTC m=+0.037659754 container create f1690a0d5d8223e9a2eb22e42274cc009c3fbd670c2eca7416af475917278dc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:26:55 compute-0 systemd[1]: Started libpod-conmon-f1690a0d5d8223e9a2eb22e42274cc009c3fbd670c2eca7416af475917278dc8.scope.
Jan 26 10:26:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:26:55 compute-0 podman[291862]: 2026-01-26 10:26:55.271369324 +0000 UTC m=+0.106449030 container init f1690a0d5d8223e9a2eb22e42274cc009c3fbd670c2eca7416af475917278dc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bhaskara, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:26:55 compute-0 podman[291862]: 2026-01-26 10:26:55.277157757 +0000 UTC m=+0.112237453 container start f1690a0d5d8223e9a2eb22e42274cc009c3fbd670c2eca7416af475917278dc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bhaskara, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:26:55 compute-0 podman[291862]: 2026-01-26 10:26:55.280064593 +0000 UTC m=+0.115144319 container attach f1690a0d5d8223e9a2eb22e42274cc009c3fbd670c2eca7416af475917278dc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 26 10:26:55 compute-0 gallant_bhaskara[291879]: 167 167
Jan 26 10:26:55 compute-0 systemd[1]: libpod-f1690a0d5d8223e9a2eb22e42274cc009c3fbd670c2eca7416af475917278dc8.scope: Deactivated successfully.
Jan 26 10:26:55 compute-0 podman[291862]: 2026-01-26 10:26:55.281346377 +0000 UTC m=+0.116426083 container died f1690a0d5d8223e9a2eb22e42274cc009c3fbd670c2eca7416af475917278dc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 26 10:26:55 compute-0 podman[291862]: 2026-01-26 10:26:55.186776641 +0000 UTC m=+0.021856367 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ca83f01ee9aaf8b162c05d947054ab939ffd4b3822d48e9cc7d84b24fcd4e1d-merged.mount: Deactivated successfully.
Jan 26 10:26:55 compute-0 podman[291862]: 2026-01-26 10:26:55.31707121 +0000 UTC m=+0.152150906 container remove f1690a0d5d8223e9a2eb22e42274cc009c3fbd670c2eca7416af475917278dc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_bhaskara, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:26:55 compute-0 systemd[1]: libpod-conmon-f1690a0d5d8223e9a2eb22e42274cc009c3fbd670c2eca7416af475917278dc8.scope: Deactivated successfully.
Jan 26 10:26:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 26 10:26:55 compute-0 podman[291901]: 2026-01-26 10:26:55.464888551 +0000 UTC m=+0.038890767 container create e86e45cae68c6f814129f76f9d18d174064e5c942dc4f8084cec03b0e9800937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 10:26:55 compute-0 systemd[1]: Started libpod-conmon-e86e45cae68c6f814129f76f9d18d174064e5c942dc4f8084cec03b0e9800937.scope.
Jan 26 10:26:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb425f251368ee66b0a54ca451152ffb7b2502a2e2e30416fd05181a2767c18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb425f251368ee66b0a54ca451152ffb7b2502a2e2e30416fd05181a2767c18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb425f251368ee66b0a54ca451152ffb7b2502a2e2e30416fd05181a2767c18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb425f251368ee66b0a54ca451152ffb7b2502a2e2e30416fd05181a2767c18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:26:55 compute-0 podman[291901]: 2026-01-26 10:26:55.537813036 +0000 UTC m=+0.111815272 container init e86e45cae68c6f814129f76f9d18d174064e5c942dc4f8084cec03b0e9800937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 10:26:55 compute-0 podman[291901]: 2026-01-26 10:26:55.448870438 +0000 UTC m=+0.022872674 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:26:55 compute-0 podman[291901]: 2026-01-26 10:26:55.544205404 +0000 UTC m=+0.118207620 container start e86e45cae68c6f814129f76f9d18d174064e5c942dc4f8084cec03b0e9800937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Jan 26 10:26:55 compute-0 podman[291901]: 2026-01-26 10:26:55.547371418 +0000 UTC m=+0.121373634 container attach e86e45cae68c6f814129f76f9d18d174064e5c942dc4f8084cec03b0e9800937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:26:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:26:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:55.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]: {
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:     "0": [
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:         {
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:             "devices": [
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "/dev/loop3"
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:             ],
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:             "lv_name": "ceph_lv0",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:             "lv_size": "21470642176",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:             "name": "ceph_lv0",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:             "tags": {
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "ceph.cluster_name": "ceph",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "ceph.crush_device_class": "",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "ceph.encrypted": "0",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "ceph.osd_id": "0",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "ceph.type": "block",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "ceph.vdo": "0",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:                 "ceph.with_tpm": "0"
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:             },
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:             "type": "block",
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:             "vg_name": "ceph_vg0"
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:         }
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]:     ]
Jan 26 10:26:55 compute-0 friendly_wilbur[291917]: }
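
The JSON above is the payload of the "ceph-volume ... lvm list --format json" call dispatched via cephadm at 10:26:54: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags duplicated as a flattened "tags" object. A minimal sketch of consuming such output; the parse_lvm_list helper is illustrative, not part of cephadm:

import json

def parse_lvm_list(raw: str) -> dict:
    """Map OSD id -> (lv_path, osd_fsid, physical devices) from ceph-volume JSON."""
    osds = {}
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            if lv.get("type") == "block":        # skip separate db/wal LVs if present
                osds[osd_id] = (lv["lv_path"],
                                lv["tags"]["ceph.osd_fsid"],
                                lv["devices"])
    return osds

# Applied to the output above this yields:
# {"0": ("/dev/ceph_vg0/ceph_lv0",
#        "ac85653c-ceaa-4fd5-80ce-94914596ed49",
#        ["/dev/loop3"])}
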
Jan 26 10:26:55 compute-0 systemd[1]: libpod-e86e45cae68c6f814129f76f9d18d174064e5c942dc4f8084cec03b0e9800937.scope: Deactivated successfully.
Jan 26 10:26:55 compute-0 podman[291901]: 2026-01-26 10:26:55.839900269 +0000 UTC m=+0.413902485 container died e86e45cae68c6f814129f76f9d18d174064e5c942dc4f8084cec03b0e9800937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cb425f251368ee66b0a54ca451152ffb7b2502a2e2e30416fd05181a2767c18-merged.mount: Deactivated successfully.
Jan 26 10:26:55 compute-0 podman[291901]: 2026-01-26 10:26:55.877373987 +0000 UTC m=+0.451376203 container remove e86e45cae68c6f814129f76f9d18d174064e5c942dc4f8084cec03b0e9800937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 10:26:55 compute-0 systemd[1]: libpod-conmon-e86e45cae68c6f814129f76f9d18d174064e5c942dc4f8084cec03b0e9800937.scope: Deactivated successfully.
Jan 26 10:26:55 compute-0 sudo[291796]: pam_unix(sudo:session): session closed for user root
Jan 26 10:26:55 compute-0 sudo[291937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:26:55 compute-0 sudo[291937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:26:55 compute-0 sudo[291937]: pam_unix(sudo:session): session closed for user root
Jan 26 10:26:56 compute-0 sudo[291962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:26:56 compute-0 sudo[291962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:26:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:26:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:56.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
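
The recurring anonymous "HEAD /" requests in the radosgw beast access log (roughly one per second, from 192.168.122.100 and .102) have the shape of load-balancer health checks. The access-line format is regular enough to parse with a single expression; a minimal sketch against the line above:

import re

BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

line = ('beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous '
        '[26/Jan/2026:10:26:56.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000026s')
m = BEAST.search(line)
print(m['client'], m['request'], m['status'], m['latency'])
# -> 192.168.122.102 HEAD / HTTP/1.0 200 0.001000026
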
Jan 26 10:26:56 compute-0 nova_compute[254880]: 2026-01-26 10:26:56.217 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:26:56 compute-0 podman[292027]: 2026-01-26 10:26:56.35906611 +0000 UTC m=+0.033900546 container create b905910e11aff9c17b96828d6243611da36c4e9cf4de18343e7d6b255392074a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:26:56 compute-0 systemd[1]: Started libpod-conmon-b905910e11aff9c17b96828d6243611da36c4e9cf4de18343e7d6b255392074a.scope.
Jan 26 10:26:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:26:56 compute-0 podman[292027]: 2026-01-26 10:26:56.42950414 +0000 UTC m=+0.104338596 container init b905910e11aff9c17b96828d6243611da36c4e9cf4de18343e7d6b255392074a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_mclean, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 26 10:26:56 compute-0 podman[292027]: 2026-01-26 10:26:56.435253561 +0000 UTC m=+0.110088007 container start b905910e11aff9c17b96828d6243611da36c4e9cf4de18343e7d6b255392074a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_mclean, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 10:26:56 compute-0 podman[292027]: 2026-01-26 10:26:56.438144478 +0000 UTC m=+0.112978904 container attach b905910e11aff9c17b96828d6243611da36c4e9cf4de18343e7d6b255392074a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:26:56 compute-0 festive_mclean[292044]: 167 167
Jan 26 10:26:56 compute-0 systemd[1]: libpod-b905910e11aff9c17b96828d6243611da36c4e9cf4de18343e7d6b255392074a.scope: Deactivated successfully.
Jan 26 10:26:56 compute-0 conmon[292044]: conmon b905910e11aff9c17b96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b905910e11aff9c17b96828d6243611da36c4e9cf4de18343e7d6b255392074a.scope/container/memory.events
Jan 26 10:26:56 compute-0 podman[292027]: 2026-01-26 10:26:56.34466019 +0000 UTC m=+0.019494656 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:26:56 compute-0 podman[292027]: 2026-01-26 10:26:56.441147627 +0000 UTC m=+0.115982063 container died b905910e11aff9c17b96828d6243611da36c4e9cf4de18343e7d6b255392074a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_mclean, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 10:26:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-947d93080d70611c85e79334ce1d2f543dff99e1827dd5c24aad43c37c4a95ab-merged.mount: Deactivated successfully.
Jan 26 10:26:56 compute-0 podman[292027]: 2026-01-26 10:26:56.479779727 +0000 UTC m=+0.154614163 container remove b905910e11aff9c17b96828d6243611da36c4e9cf4de18343e7d6b255392074a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 10:26:56 compute-0 systemd[1]: libpod-conmon-b905910e11aff9c17b96828d6243611da36c4e9cf4de18343e7d6b255392074a.scope: Deactivated successfully.
Jan 26 10:26:56 compute-0 podman[292070]: 2026-01-26 10:26:56.627046333 +0000 UTC m=+0.037562832 container create 9aa14fd07df4311070b424489084e369e8e3a66e83866b4866b2b778b7a60942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:26:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:26:56] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:26:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:26:56] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:26:56 compute-0 systemd[1]: Started libpod-conmon-9aa14fd07df4311070b424489084e369e8e3a66e83866b4866b2b778b7a60942.scope.
Jan 26 10:26:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82991477020c143b746ca3903d118f84d578d995e4d3e4629c087fa03e59f5e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82991477020c143b746ca3903d118f84d578d995e4d3e4629c087fa03e59f5e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82991477020c143b746ca3903d118f84d578d995e4d3e4629c087fa03e59f5e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82991477020c143b746ca3903d118f84d578d995e4d3e4629c087fa03e59f5e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:26:56 compute-0 podman[292070]: 2026-01-26 10:26:56.610420664 +0000 UTC m=+0.020937183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:26:56 compute-0 podman[292070]: 2026-01-26 10:26:56.706645414 +0000 UTC m=+0.117161933 container init 9aa14fd07df4311070b424489084e369e8e3a66e83866b4866b2b778b7a60942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_kowalevski, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 10:26:56 compute-0 podman[292070]: 2026-01-26 10:26:56.712317664 +0000 UTC m=+0.122834163 container start 9aa14fd07df4311070b424489084e369e8e3a66e83866b4866b2b778b7a60942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_kowalevski, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:26:56 compute-0 podman[292070]: 2026-01-26 10:26:56.715102937 +0000 UTC m=+0.125619456 container attach 9aa14fd07df4311070b424489084e369e8e3a66e83866b4866b2b778b7a60942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_kowalevski, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:26:56 compute-0 nova_compute[254880]: 2026-01-26 10:26:56.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:26:56 compute-0 nova_compute[254880]: 2026-01-26 10:26:56.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 10:26:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:26:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:26:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:26:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:26:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:26:57 compute-0 nova_compute[254880]: 2026-01-26 10:26:57.022 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 10:26:57 compute-0 ceph-mon[74456]: pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 26 10:26:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1505490843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:26:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3786943580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:26:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:57.294Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 26 10:26:57 compute-0 lvm[292162]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:26:57 compute-0 lvm[292162]: VG ceph_vg0 finished
Jan 26 10:26:57 compute-0 nifty_kowalevski[292087]: {}
Jan 26 10:26:57 compute-0 systemd[1]: libpod-9aa14fd07df4311070b424489084e369e8e3a66e83866b4866b2b778b7a60942.scope: Deactivated successfully.
Jan 26 10:26:57 compute-0 podman[292070]: 2026-01-26 10:26:57.398483293 +0000 UTC m=+0.808999812 container died 9aa14fd07df4311070b424489084e369e8e3a66e83866b4866b2b778b7a60942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_kowalevski, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 26 10:26:57 compute-0 systemd[1]: libpod-9aa14fd07df4311070b424489084e369e8e3a66e83866b4866b2b778b7a60942.scope: Consumed 1.075s CPU time.
Jan 26 10:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-82991477020c143b746ca3903d118f84d578d995e4d3e4629c087fa03e59f5e0-merged.mount: Deactivated successfully.
Jan 26 10:26:57 compute-0 podman[292070]: 2026-01-26 10:26:57.44081097 +0000 UTC m=+0.851327469 container remove 9aa14fd07df4311070b424489084e369e8e3a66e83866b4866b2b778b7a60942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 10:26:57 compute-0 systemd[1]: libpod-conmon-9aa14fd07df4311070b424489084e369e8e3a66e83866b4866b2b778b7a60942.scope: Deactivated successfully.
Jan 26 10:26:57 compute-0 sudo[291962]: pam_unix(sudo:session): session closed for user root
Jan 26 10:26:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:26:57 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:26:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:26:57 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:26:57 compute-0 sudo[292176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:26:57 compute-0 sudo[292176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:26:57 compute-0 sudo[292176]: pam_unix(sudo:session): session closed for user root
Jan 26 10:26:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:26:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:57.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:26:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:26:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:26:58.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:26:58 compute-0 ceph-mon[74456]: pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 26 10:26:58 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:26:58 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:26:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/4254120803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:26:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/4254120803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:26:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:26:58.926Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:26:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 26 10:26:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:26:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:26:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:26:59.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:26:59 compute-0 nova_compute[254880]: 2026-01-26 10:26:59.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:26:59 compute-0 nova_compute[254880]: 2026-01-26 10:26:59.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 10:27:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:27:00 compute-0 nova_compute[254880]: 2026-01-26 10:27:00.111 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:00.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:00 compute-0 ceph-mon[74456]: pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 26 10:27:00 compute-0 nova_compute[254880]: 2026-01-26 10:27:00.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:27:01 compute-0 nova_compute[254880]: 2026-01-26 10:27:01.220 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 26 10:27:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:01.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:27:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:27:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:27:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:27:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:02.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:02 compute-0 ceph-mon[74456]: pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 26 10:27:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 26 10:27:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:03.615Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:03.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:27:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:27:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:04.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:04 compute-0 ceph-mon[74456]: pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 26 10:27:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:27:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:27:05 compute-0 nova_compute[254880]: 2026-01-26 10:27:05.118 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:05.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:06.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:06 compute-0 nova_compute[254880]: 2026-01-26 10:27:06.221 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:06 compute-0 ceph-mon[74456]: pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:27:06] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:27:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:27:06] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:27:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:27:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:27:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:27:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:27:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:07.296Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:27:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:07.296Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:07.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:08.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:08 compute-0 sudo[292212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:27:08 compute-0 sudo[292212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:27:08 compute-0 sudo[292212]: pam_unix(sudo:session): session closed for user root
Jan 26 10:27:08 compute-0 podman[292236]: 2026-01-26 10:27:08.311769351 +0000 UTC m=+0.065021607 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 10:27:08 compute-0 ceph-mon[74456]: pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:08.929Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:09.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:27:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:10.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:10 compute-0 nova_compute[254880]: 2026-01-26 10:27:10.167 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:10 compute-0 ceph-mon[74456]: pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:11 compute-0 nova_compute[254880]: 2026-01-26 10:27:11.271 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:11.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:27:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:27:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:27:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:27:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:12.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:12 compute-0 ceph-mon[74456]: pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:13.616Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:27:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:13.616Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:27:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:13.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:14.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:14 compute-0 ceph-mon[74456]: pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:27:15 compute-0 nova_compute[254880]: 2026-01-26 10:27:15.172 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:15.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:16.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:16 compute-0 nova_compute[254880]: 2026-01-26 10:27:16.274 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:27:16] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:27:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:27:16] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:27:16 compute-0 ceph-mon[74456]: pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:27:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:27:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:27:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:27:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:17.297Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:17.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:18.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:27:18
Jan 26 10:27:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:27:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:27:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'vms', '.nfs', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'volumes']
Jan 26 10:27:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:27:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:27:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:27:18 compute-0 ceph-mon[74456]: pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:18 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:27:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:27:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:27:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:27:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:27:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:27:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:27:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:18.930Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:27:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:18.930Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:27:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:27:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:19.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:27:20 compute-0 nova_compute[254880]: 2026-01-26 10:27:20.175 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:20.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:20 compute-0 ceph-mon[74456]: pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:21 compute-0 nova_compute[254880]: 2026-01-26 10:27:21.278 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:21.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:27:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:27:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:27:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:27:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:22.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:22 compute-0 ceph-mon[74456]: pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:23 compute-0 podman[292273]: 2026-01-26 10:27:23.178138534 +0000 UTC m=+0.108987115 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller)
Jan 26 10:27:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:23.617Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:23.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:24.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:24 compute-0 ceph-mon[74456]: pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:27:25 compute-0 nova_compute[254880]: 2026-01-26 10:27:25.179 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:25.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:26.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:26 compute-0 nova_compute[254880]: 2026-01-26 10:27:26.406 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:27:26] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:27:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:27:26] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:27:26 compute-0 ceph-mon[74456]: pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:27:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:27:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:27:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:27:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:27.298Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:27.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:27 compute-0 ceph-mon[74456]: pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:28.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:28 compute-0 sudo[292303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:27:28 compute-0 sudo[292303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:27:28 compute-0 sudo[292303]: pam_unix(sudo:session): session closed for user root
Jan 26 10:27:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:28.931Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:27:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:28.932Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:27:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:29.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:27:30 compute-0 nova_compute[254880]: 2026-01-26 10:27:30.183 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:30.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:30 compute-0 ceph-mon[74456]: pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:31 compute-0 nova_compute[254880]: 2026-01-26 10:27:31.407 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:31 compute-0 ceph-mon[74456]: pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:31.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:27:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:27:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:27:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:27:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:32.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:33 compute-0 ceph-mon[74456]: pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:33.618Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:33.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:27:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:27:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:34.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:27:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:27:35 compute-0 nova_compute[254880]: 2026-01-26 10:27:35.187 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:35 compute-0 ceph-mon[74456]: pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:35.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:27:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:36.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:27:36 compute-0 nova_compute[254880]: 2026-01-26 10:27:36.442 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:27:36] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:27:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:27:36] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:27:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:27:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:27:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:27:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:27:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:37.299Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:37 compute-0 ceph-mon[74456]: pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:37.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:38.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:38.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:39 compute-0 podman[292340]: 2026-01-26 10:27:39.126034257 +0000 UTC m=+0.055519656 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 10:27:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:39 compute-0 ceph-mon[74456]: pgmap v1370: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:39.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:27:40 compute-0 nova_compute[254880]: 2026-01-26 10:27:40.190 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:40.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:41 compute-0 nova_compute[254880]: 2026-01-26 10:27:41.456 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:41 compute-0 ceph-mon[74456]: pgmap v1371: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:41.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:27:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:27:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:27:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:27:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:42.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:43 compute-0 ceph-mon[74456]: pgmap v1372: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:43.620Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:43.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:44.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:44 compute-0 nova_compute[254880]: 2026-01-26 10:27:44.973 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:27:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:27:45 compute-0 nova_compute[254880]: 2026-01-26 10:27:45.193 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:45 compute-0 ceph-mon[74456]: pgmap v1373: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:45.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:45 compute-0 nova_compute[254880]: 2026-01-26 10:27:45.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:27:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:46.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:46 compute-0 nova_compute[254880]: 2026-01-26 10:27:46.458 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:27:46] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:27:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:27:46] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:27:46 compute-0 nova_compute[254880]: 2026-01-26 10:27:46.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:27:46 compute-0 nova_compute[254880]: 2026-01-26 10:27:46.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:27:46 compute-0 nova_compute[254880]: 2026-01-26 10:27:46.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:27:46 compute-0 nova_compute[254880]: 2026-01-26 10:27:46.979 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:27:46 compute-0 nova_compute[254880]: 2026-01-26 10:27:46.980 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:27:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:27:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:27:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:27:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.010 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.010 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.010 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.011 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.011 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:27:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:47.300Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:27:47 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2695622730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.445 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:27:47 compute-0 ceph-mon[74456]: pgmap v1374: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:47 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2695622730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.600 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.601 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4504MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.602 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.602 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.670 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.670 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:27:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:47.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.816 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing inventories for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.891 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating ProviderTree inventory for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.892 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Updating inventory in ProviderTree for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.912 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing aggregate associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.941 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Refreshing trait associations for resource provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf, traits: COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 10:27:47 compute-0 nova_compute[254880]: 2026-01-26 10:27:47.957 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:27:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:48.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:48 compute-0 sudo[292410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:27:48 compute-0 sudo[292410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:27:48 compute-0 sudo[292410]: pam_unix(sudo:session): session closed for user root
Jan 26 10:27:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:27:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1215905270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:27:48 compute-0 nova_compute[254880]: 2026-01-26 10:27:48.438 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:27:48 compute-0 nova_compute[254880]: 2026-01-26 10:27:48.443 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:27:48 compute-0 nova_compute[254880]: 2026-01-26 10:27:48.462 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:27:48 compute-0 nova_compute[254880]: 2026-01-26 10:27:48.463 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:27:48 compute-0 nova_compute[254880]: 2026-01-26 10:27:48.464 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:27:48 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1215905270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:27:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:27:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:27:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:27:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:27:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:27:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:27:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:27:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:27:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:48.934Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:27:49 compute-0 ceph-mon[74456]: pgmap v1375: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:49.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:50 compute-0 nova_compute[254880]: 2026-01-26 10:27:50.197 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:27:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:50.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:51 compute-0 nova_compute[254880]: 2026-01-26 10:27:51.462 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:51 compute-0 ceph-mon[74456]: pgmap v1376: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1866049283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:27:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:51.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:27:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:27:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:27:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:27:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:52.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:52 compute-0 nova_compute[254880]: 2026-01-26 10:27:52.443 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:27:52 compute-0 nova_compute[254880]: 2026-01-26 10:27:52.443 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:27:52 compute-0 nova_compute[254880]: 2026-01-26 10:27:52.443 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:27:52 compute-0 nova_compute[254880]: 2026-01-26 10:27:52.444 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:27:52 compute-0 nova_compute[254880]: 2026-01-26 10:27:52.444 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:27:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3376433885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:27:52 compute-0 nova_compute[254880]: 2026-01-26 10:27:52.953 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:27:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:53 compute-0 ceph-mon[74456]: pgmap v1377: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:53.621Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:27:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:53.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:27:54 compute-0 podman[292443]: 2026-01-26 10:27:54.148155267 +0000 UTC m=+0.082847842 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:27:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:27:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:54.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:27:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:27:54.715 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:27:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:27:54.716 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:27:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:27:54.716 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:27:54 compute-0 nova_compute[254880]: 2026-01-26 10:27:54.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:27:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:27:55 compute-0 nova_compute[254880]: 2026-01-26 10:27:55.198 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:55 compute-0 ceph-mon[74456]: pgmap v1378: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:27:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:55.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:56.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:56 compute-0 nova_compute[254880]: 2026-01-26 10:27:56.463 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:27:56 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1747333867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:27:56 compute-0 sshd-session[292471]: Received disconnect from 117.50.196.2 port 41770:11:  [preauth]
Jan 26 10:27:56 compute-0 sshd-session[292471]: Disconnected from authenticating user root 117.50.196.2 port 41770 [preauth]
Jan 26 10:27:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:27:56] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:27:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:27:56] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:27:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:27:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:27:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:27:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:27:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:27:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:57.300Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:27:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4282693921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:27:57 compute-0 ceph-mon[74456]: pgmap v1379: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:57.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:57 compute-0 sudo[292476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:27:57 compute-0 sudo[292476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:27:57 compute-0 sudo[292476]: pam_unix(sudo:session): session closed for user root
Jan 26 10:27:57 compute-0 sudo[292501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 26 10:27:57 compute-0 sudo[292501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:27:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:27:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:27:58.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:27:58 compute-0 podman[292598]: 2026-01-26 10:27:58.451415241 +0000 UTC m=+0.053532044 container exec 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Jan 26 10:27:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/765544144' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:27:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/765544144' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:27:58 compute-0 podman[292598]: 2026-01-26 10:27:58.545576185 +0000 UTC m=+0.147692958 container exec_died 3b123b7595d9c9d9316b1a8ea4d959d9d6c7c23d8a2432610714fc468c22d66a (image=quay.io/ceph/ceph:v19, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 26 10:27:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:58.935Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:27:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:27:58.936Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:27:58 compute-0 podman[292718]: 2026-01-26 10:27:58.979254782 +0000 UTC m=+0.100788704 container exec 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:27:59 compute-0 podman[292744]: 2026-01-26 10:27:59.067371304 +0000 UTC m=+0.071882286 container exec_died 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:27:59 compute-0 podman[292718]: 2026-01-26 10:27:59.071894896 +0000 UTC m=+0.193428808 container exec_died 1fdcd1ef5dc3a17c5633909f330f7ba23d710bf5a809a108a68127d055b30c71 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:27:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:59 compute-0 podman[292812]: 2026-01-26 10:27:59.389031644 +0000 UTC m=+0.056824050 container exec 30687b991877ce56126a0423776942e639cc0488e2a92116947c3c0dae468e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:27:59 compute-0 podman[292812]: 2026-01-26 10:27:59.406702381 +0000 UTC m=+0.074494777 container exec_died 30687b991877ce56126a0423776942e639cc0488e2a92116947c3c0dae468e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:27:59 compute-0 ceph-mon[74456]: pgmap v1380: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:27:59 compute-0 podman[292875]: 2026-01-26 10:27:59.634610607 +0000 UTC m=+0.055721851 container exec 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 10:27:59 compute-0 podman[292875]: 2026-01-26 10:27:59.641491602 +0000 UTC m=+0.062602816 container exec_died 546bc7703a88da8278c63e244aa62a655cacf7b9ac80242d9a1c562322742653 (image=quay.io/ceph/haproxy:2.3, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-haproxy-nfs-cephfs-compute-0-eucyze)
Jan 26 10:27:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:27:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:27:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:27:59.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:27:59 compute-0 podman[292941]: 2026-01-26 10:27:59.845145415 +0000 UTC m=+0.051794575 container exec 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, build-date=2023-02-22T09:23:20, release=1793, vcs-type=git)
Jan 26 10:27:59 compute-0 podman[292941]: 2026-01-26 10:27:59.858630059 +0000 UTC m=+0.065279199 container exec_died 14bcbdcf0f31013bc7fe914af7f7b7358855c7c6a039a7319c11716e75b73396 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-keepalived-nfs-cephfs-compute-0-orrhyj, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, release=1793, description=keepalived for Ceph, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, version=2.2.4)
Jan 26 10:28:00 compute-0 podman[293001]: 2026-01-26 10:28:00.077913362 +0000 UTC m=+0.054543820 container exec c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:28:00 compute-0 podman[293001]: 2026-01-26 10:28:00.110633984 +0000 UTC m=+0.087264422 container exec_died c69b7a4f7308fa34c589fbd8c0cc697a2f34b962ff5155c71e280b4730971a1c (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:28:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:28:00 compute-0 nova_compute[254880]: 2026-01-26 10:28:00.200 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:00.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:00 compute-0 podman[293071]: 2026-01-26 10:28:00.325277242 +0000 UTC m=+0.046953255 container exec ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 10:28:00 compute-0 podman[293071]: 2026-01-26 10:28:00.510658603 +0000 UTC m=+0.232334586 container exec_died ade92210eaf6e60d92ec4adb3dcec6d668b7e9592325fa9e516664d1c7c6181e (image=quay.io/ceph/grafana:10.4.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 26 10:28:00 compute-0 podman[293187]: 2026-01-26 10:28:00.898691011 +0000 UTC m=+0.048947899 container exec 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:28:00 compute-0 podman[293187]: 2026-01-26 10:28:00.935555224 +0000 UTC m=+0.085812092 container exec_died 61572bd53ebb45ea00a31c00c800a7d6efb6f6b2839e92cef2ab638b566e5488 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 26 10:28:00 compute-0 sudo[292501]: pam_unix(sudo:session): session closed for user root
Jan 26 10:28:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:28:01 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:28:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:28:01 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:28:01 compute-0 sudo[293231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:28:01 compute-0 sudo[293231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:28:01 compute-0 sudo[293231]: pam_unix(sudo:session): session closed for user root
Jan 26 10:28:01 compute-0 sudo[293256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:28:01 compute-0 sudo[293256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:28:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:01 compute-0 nova_compute[254880]: 2026-01-26 10:28:01.465 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:01 compute-0 sudo[293256]: pam_unix(sudo:session): session closed for user root
Jan 26 10:28:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:28:01 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:28:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:28:01 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:28:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:28:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:28:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:01.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:28:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:28:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:28:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:28:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:28:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:28:02 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:28:02 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:28:02 compute-0 ceph-mon[74456]: pgmap v1381: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:02 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:28:02 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:28:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:28:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:28:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:28:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:28:02 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:28:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:28:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:28:02 compute-0 sudo[293314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:28:02 compute-0 sudo[293314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:28:02 compute-0 sudo[293314]: pam_unix(sudo:session): session closed for user root
Jan 26 10:28:02 compute-0 sudo[293339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:28:02 compute-0 sudo[293339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:28:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:02.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:02 compute-0 podman[293408]: 2026-01-26 10:28:02.686408294 +0000 UTC m=+0.044726216 container create b14f7eabd445e6e898f6a83fac0457fa6b574b34fca26eaeaaea6746f997e079 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Jan 26 10:28:02 compute-0 systemd[1]: Started libpod-conmon-b14f7eabd445e6e898f6a83fac0457fa6b574b34fca26eaeaaea6746f997e079.scope.
Jan 26 10:28:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:28:02 compute-0 podman[293408]: 2026-01-26 10:28:02.667833864 +0000 UTC m=+0.026151806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:28:02 compute-0 podman[293408]: 2026-01-26 10:28:02.76385825 +0000 UTC m=+0.122176192 container init b14f7eabd445e6e898f6a83fac0457fa6b574b34fca26eaeaaea6746f997e079 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hertz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 26 10:28:02 compute-0 podman[293408]: 2026-01-26 10:28:02.769912532 +0000 UTC m=+0.128230454 container start b14f7eabd445e6e898f6a83fac0457fa6b574b34fca26eaeaaea6746f997e079 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 26 10:28:02 compute-0 podman[293408]: 2026-01-26 10:28:02.77316684 +0000 UTC m=+0.131484762 container attach b14f7eabd445e6e898f6a83fac0457fa6b574b34fca26eaeaaea6746f997e079 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Jan 26 10:28:02 compute-0 loving_hertz[293425]: 167 167
Jan 26 10:28:02 compute-0 systemd[1]: libpod-b14f7eabd445e6e898f6a83fac0457fa6b574b34fca26eaeaaea6746f997e079.scope: Deactivated successfully.
Jan 26 10:28:02 compute-0 podman[293408]: 2026-01-26 10:28:02.774975608 +0000 UTC m=+0.133293530 container died b14f7eabd445e6e898f6a83fac0457fa6b574b34fca26eaeaaea6746f997e079 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hertz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 26 10:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eb39e7f43d41c7fbba2bcb5ca830cf6142db6c7eb28c0c1547e5c85a9e2aed2-merged.mount: Deactivated successfully.
Jan 26 10:28:02 compute-0 podman[293408]: 2026-01-26 10:28:02.810955707 +0000 UTC m=+0.169273629 container remove b14f7eabd445e6e898f6a83fac0457fa6b574b34fca26eaeaaea6746f997e079 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_hertz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 10:28:02 compute-0 systemd[1]: libpod-conmon-b14f7eabd445e6e898f6a83fac0457fa6b574b34fca26eaeaaea6746f997e079.scope: Deactivated successfully.
Jan 26 10:28:02 compute-0 podman[293450]: 2026-01-26 10:28:02.973198835 +0000 UTC m=+0.041789666 container create 03eac45704502055e24494235e0a9ed39bebc3af3f379ecaa240e170f0d2a819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_bardeen, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:28:03 compute-0 systemd[1]: Started libpod-conmon-03eac45704502055e24494235e0a9ed39bebc3af3f379ecaa240e170f0d2a819.scope.
Jan 26 10:28:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62719a7a6884b7857e8e759a41057406c0e3f7474570bc465cdeaf88966e60a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62719a7a6884b7857e8e759a41057406c0e3f7474570bc465cdeaf88966e60a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62719a7a6884b7857e8e759a41057406c0e3f7474570bc465cdeaf88966e60a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62719a7a6884b7857e8e759a41057406c0e3f7474570bc465cdeaf88966e60a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62719a7a6884b7857e8e759a41057406c0e3f7474570bc465cdeaf88966e60a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:28:03 compute-0 podman[293450]: 2026-01-26 10:28:02.956225499 +0000 UTC m=+0.024816350 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:28:03 compute-0 podman[293450]: 2026-01-26 10:28:03.058528323 +0000 UTC m=+0.127119154 container init 03eac45704502055e24494235e0a9ed39bebc3af3f379ecaa240e170f0d2a819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 10:28:03 compute-0 podman[293450]: 2026-01-26 10:28:03.064290618 +0000 UTC m=+0.132881439 container start 03eac45704502055e24494235e0a9ed39bebc3af3f379ecaa240e170f0d2a819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_bardeen, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:28:03 compute-0 podman[293450]: 2026-01-26 10:28:03.067493364 +0000 UTC m=+0.136084195 container attach 03eac45704502055e24494235e0a9ed39bebc3af3f379ecaa240e170f0d2a819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_bardeen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:28:03 compute-0 ceph-mon[74456]: pgmap v1382: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:28:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:28:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:28:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:28:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:28:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:28:03 compute-0 cranky_bardeen[293466]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:28:03 compute-0 cranky_bardeen[293466]: --> All data devices are unavailable
Jan 26 10:28:03 compute-0 systemd[1]: libpod-03eac45704502055e24494235e0a9ed39bebc3af3f379ecaa240e170f0d2a819.scope: Deactivated successfully.
Jan 26 10:28:03 compute-0 podman[293450]: 2026-01-26 10:28:03.420083237 +0000 UTC m=+0.488674088 container died 03eac45704502055e24494235e0a9ed39bebc3af3f379ecaa240e170f0d2a819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_bardeen, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-62719a7a6884b7857e8e759a41057406c0e3f7474570bc465cdeaf88966e60a6-merged.mount: Deactivated successfully.
Jan 26 10:28:03 compute-0 podman[293450]: 2026-01-26 10:28:03.459031706 +0000 UTC m=+0.527622537 container remove 03eac45704502055e24494235e0a9ed39bebc3af3f379ecaa240e170f0d2a819 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 10:28:03 compute-0 systemd[1]: libpod-conmon-03eac45704502055e24494235e0a9ed39bebc3af3f379ecaa240e170f0d2a819.scope: Deactivated successfully.
Jan 26 10:28:03 compute-0 sudo[293339]: pam_unix(sudo:session): session closed for user root
Jan 26 10:28:03 compute-0 sudo[293492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:28:03 compute-0 sudo[293492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:28:03 compute-0 sudo[293492]: pam_unix(sudo:session): session closed for user root
Jan 26 10:28:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:03.621Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:03 compute-0 sudo[293517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:28:03 compute-0 sudo[293517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:28:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:28:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:03.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:28:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:28:04 compute-0 podman[293582]: 2026-01-26 10:28:04.033982507 +0000 UTC m=+0.039398672 container create c78efd135e184e419ecf386cd99ae72652d5c27f90dfa7f40bceedfce6e40ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 26 10:28:04 compute-0 systemd[1]: Started libpod-conmon-c78efd135e184e419ecf386cd99ae72652d5c27f90dfa7f40bceedfce6e40ab3.scope.
Jan 26 10:28:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:28:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:28:04 compute-0 podman[293582]: 2026-01-26 10:28:04.094100185 +0000 UTC m=+0.099516380 container init c78efd135e184e419ecf386cd99ae72652d5c27f90dfa7f40bceedfce6e40ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_wilbur, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:28:04 compute-0 podman[293582]: 2026-01-26 10:28:04.101465533 +0000 UTC m=+0.106881698 container start c78efd135e184e419ecf386cd99ae72652d5c27f90dfa7f40bceedfce6e40ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_wilbur, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:28:04 compute-0 podman[293582]: 2026-01-26 10:28:04.104527936 +0000 UTC m=+0.109944111 container attach c78efd135e184e419ecf386cd99ae72652d5c27f90dfa7f40bceedfce6e40ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_wilbur, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:28:04 compute-0 wizardly_wilbur[293598]: 167 167
Jan 26 10:28:04 compute-0 systemd[1]: libpod-c78efd135e184e419ecf386cd99ae72652d5c27f90dfa7f40bceedfce6e40ab3.scope: Deactivated successfully.
Jan 26 10:28:04 compute-0 podman[293582]: 2026-01-26 10:28:04.106289223 +0000 UTC m=+0.111705418 container died c78efd135e184e419ecf386cd99ae72652d5c27f90dfa7f40bceedfce6e40ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_wilbur, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 10:28:04 compute-0 podman[293582]: 2026-01-26 10:28:04.01702454 +0000 UTC m=+0.022440725 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:28:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1a0570b6517fd56c3879e69c8a01f45dfd980ebc282a252e75603acd95d5846-merged.mount: Deactivated successfully.
Jan 26 10:28:04 compute-0 podman[293582]: 2026-01-26 10:28:04.140534765 +0000 UTC m=+0.145950930 container remove c78efd135e184e419ecf386cd99ae72652d5c27f90dfa7f40bceedfce6e40ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_wilbur, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 26 10:28:04 compute-0 systemd[1]: libpod-conmon-c78efd135e184e419ecf386cd99ae72652d5c27f90dfa7f40bceedfce6e40ab3.scope: Deactivated successfully.
Jan 26 10:28:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:04.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:04 compute-0 podman[293621]: 2026-01-26 10:28:04.305358632 +0000 UTC m=+0.039171794 container create 0c8ab1c0d47d0a48a7f8cb9d4eebd1521d294a8d93a0fb54b9e8c9f596832dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_raman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:28:04 compute-0 systemd[1]: Started libpod-conmon-0c8ab1c0d47d0a48a7f8cb9d4eebd1521d294a8d93a0fb54b9e8c9f596832dcf.scope.
Jan 26 10:28:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8572f9f79de5779be0feccc44f0764152af6fc95cbe49bee51b511fc5ae573e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8572f9f79de5779be0feccc44f0764152af6fc95cbe49bee51b511fc5ae573e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8572f9f79de5779be0feccc44f0764152af6fc95cbe49bee51b511fc5ae573e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8572f9f79de5779be0feccc44f0764152af6fc95cbe49bee51b511fc5ae573e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:28:04 compute-0 podman[293621]: 2026-01-26 10:28:04.288743896 +0000 UTC m=+0.022557068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:28:04 compute-0 podman[293621]: 2026-01-26 10:28:04.397238427 +0000 UTC m=+0.131051599 container init 0c8ab1c0d47d0a48a7f8cb9d4eebd1521d294a8d93a0fb54b9e8c9f596832dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_raman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 10:28:04 compute-0 podman[293621]: 2026-01-26 10:28:04.403523206 +0000 UTC m=+0.137336368 container start 0c8ab1c0d47d0a48a7f8cb9d4eebd1521d294a8d93a0fb54b9e8c9f596832dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:28:04 compute-0 podman[293621]: 2026-01-26 10:28:04.406896267 +0000 UTC m=+0.140709429 container attach 0c8ab1c0d47d0a48a7f8cb9d4eebd1521d294a8d93a0fb54b9e8c9f596832dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_raman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 26 10:28:04 compute-0 dreamy_raman[293638]: {
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:     "0": [
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:         {
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:             "devices": [
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "/dev/loop3"
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:             ],
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:             "lv_name": "ceph_lv0",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:             "lv_size": "21470642176",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:             "name": "ceph_lv0",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:             "tags": {
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "ceph.cluster_name": "ceph",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "ceph.crush_device_class": "",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "ceph.encrypted": "0",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "ceph.osd_id": "0",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "ceph.type": "block",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "ceph.vdo": "0",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:                 "ceph.with_tpm": "0"
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:             },
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:             "type": "block",
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:             "vg_name": "ceph_vg0"
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:         }
Jan 26 10:28:04 compute-0 dreamy_raman[293638]:     ]
Jan 26 10:28:04 compute-0 dreamy_raman[293638]: }
Jan 26 10:28:04 compute-0 systemd[1]: libpod-0c8ab1c0d47d0a48a7f8cb9d4eebd1521d294a8d93a0fb54b9e8c9f596832dcf.scope: Deactivated successfully.
Jan 26 10:28:04 compute-0 podman[293621]: 2026-01-26 10:28:04.690615696 +0000 UTC m=+0.424428858 container died 0c8ab1c0d47d0a48a7f8cb9d4eebd1521d294a8d93a0fb54b9e8c9f596832dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 26 10:28:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8572f9f79de5779be0feccc44f0764152af6fc95cbe49bee51b511fc5ae573e-merged.mount: Deactivated successfully.
Jan 26 10:28:04 compute-0 podman[293621]: 2026-01-26 10:28:04.867907289 +0000 UTC m=+0.601720451 container remove 0c8ab1c0d47d0a48a7f8cb9d4eebd1521d294a8d93a0fb54b9e8c9f596832dcf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_raman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:28:04 compute-0 sudo[293517]: pam_unix(sudo:session): session closed for user root
Jan 26 10:28:04 compute-0 systemd[1]: libpod-conmon-0c8ab1c0d47d0a48a7f8cb9d4eebd1521d294a8d93a0fb54b9e8c9f596832dcf.scope: Deactivated successfully.
Jan 26 10:28:04 compute-0 sudo[293661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:28:04 compute-0 sudo[293661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:28:04 compute-0 sudo[293661]: pam_unix(sudo:session): session closed for user root
Jan 26 10:28:05 compute-0 sudo[293686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:28:05 compute-0 sudo[293686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:28:05 compute-0 nova_compute[254880]: 2026-01-26 10:28:05.205 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:28:05 compute-0 ceph-mon[74456]: pgmap v1383: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:28:05 compute-0 podman[293751]: 2026-01-26 10:28:05.454876572 +0000 UTC m=+0.048808194 container create 2cfc007e03494b05f0efae09b57a8ee05648428af57c3ea09e205c8c70b86a8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 10:28:05 compute-0 systemd[1]: Started libpod-conmon-2cfc007e03494b05f0efae09b57a8ee05648428af57c3ea09e205c8c70b86a8c.scope.
Jan 26 10:28:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:28:05 compute-0 podman[293751]: 2026-01-26 10:28:05.43583433 +0000 UTC m=+0.029765982 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:28:05 compute-0 podman[293751]: 2026-01-26 10:28:05.622810804 +0000 UTC m=+0.216742446 container init 2cfc007e03494b05f0efae09b57a8ee05648428af57c3ea09e205c8c70b86a8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_goldberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 10:28:05 compute-0 podman[293751]: 2026-01-26 10:28:05.636745139 +0000 UTC m=+0.230676761 container start 2cfc007e03494b05f0efae09b57a8ee05648428af57c3ea09e205c8c70b86a8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:28:05 compute-0 podman[293751]: 2026-01-26 10:28:05.640550622 +0000 UTC m=+0.234482244 container attach 2cfc007e03494b05f0efae09b57a8ee05648428af57c3ea09e205c8c70b86a8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:28:05 compute-0 epic_goldberg[293767]: 167 167
Jan 26 10:28:05 compute-0 systemd[1]: libpod-2cfc007e03494b05f0efae09b57a8ee05648428af57c3ea09e205c8c70b86a8c.scope: Deactivated successfully.
Jan 26 10:28:05 compute-0 podman[293751]: 2026-01-26 10:28:05.644651032 +0000 UTC m=+0.238582654 container died 2cfc007e03494b05f0efae09b57a8ee05648428af57c3ea09e205c8c70b86a8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 10:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-05a1454ab509eec18b753e612441f7fe5e0de481c57c4bf184d3e6f76869fde6-merged.mount: Deactivated successfully.
Jan 26 10:28:05 compute-0 podman[293751]: 2026-01-26 10:28:05.686278583 +0000 UTC m=+0.280210205 container remove 2cfc007e03494b05f0efae09b57a8ee05648428af57c3ea09e205c8c70b86a8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:28:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:28:05 compute-0 systemd[1]: libpod-conmon-2cfc007e03494b05f0efae09b57a8ee05648428af57c3ea09e205c8c70b86a8c.scope: Deactivated successfully.
Jan 26 10:28:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:05.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:05 compute-0 podman[293792]: 2026-01-26 10:28:05.867239095 +0000 UTC m=+0.028650932 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:28:06 compute-0 podman[293792]: 2026-01-26 10:28:06.069613204 +0000 UTC m=+0.231025011 container create 5e38e76343a24719cc011b4b21f54aeecebd808b717b23b4bed63f4e0ec877a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_cerf, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 10:28:06 compute-0 systemd[1]: Started libpod-conmon-5e38e76343a24719cc011b4b21f54aeecebd808b717b23b4bed63f4e0ec877a8.scope.
Jan 26 10:28:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c943913ce0f8224fe02e0d6f9e6ec8ada9d734493cc8ba67ece1d105bd700e70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c943913ce0f8224fe02e0d6f9e6ec8ada9d734493cc8ba67ece1d105bd700e70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c943913ce0f8224fe02e0d6f9e6ec8ada9d734493cc8ba67ece1d105bd700e70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c943913ce0f8224fe02e0d6f9e6ec8ada9d734493cc8ba67ece1d105bd700e70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:28:06 compute-0 podman[293792]: 2026-01-26 10:28:06.134631685 +0000 UTC m=+0.296043522 container init 5e38e76343a24719cc011b4b21f54aeecebd808b717b23b4bed63f4e0ec877a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_cerf, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:28:06 compute-0 podman[293792]: 2026-01-26 10:28:06.143957256 +0000 UTC m=+0.305369063 container start 5e38e76343a24719cc011b4b21f54aeecebd808b717b23b4bed63f4e0ec877a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 26 10:28:06 compute-0 podman[293792]: 2026-01-26 10:28:06.147120691 +0000 UTC m=+0.308532518 container attach 5e38e76343a24719cc011b4b21f54aeecebd808b717b23b4bed63f4e0ec877a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 10:28:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:06.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:06 compute-0 nova_compute[254880]: 2026-01-26 10:28:06.466 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:28:06] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:28:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:28:06] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:28:06 compute-0 lvm[293885]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:28:06 compute-0 lvm[293885]: VG ceph_vg0 finished
Jan 26 10:28:06 compute-0 distracted_cerf[293809]: {}
Jan 26 10:28:06 compute-0 systemd[1]: libpod-5e38e76343a24719cc011b4b21f54aeecebd808b717b23b4bed63f4e0ec877a8.scope: Deactivated successfully.
Jan 26 10:28:06 compute-0 podman[293792]: 2026-01-26 10:28:06.860895219 +0000 UTC m=+1.022307026 container died 5e38e76343a24719cc011b4b21f54aeecebd808b717b23b4bed63f4e0ec877a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 26 10:28:06 compute-0 systemd[1]: libpod-5e38e76343a24719cc011b4b21f54aeecebd808b717b23b4bed63f4e0ec877a8.scope: Consumed 1.163s CPU time.
Jan 26 10:28:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c943913ce0f8224fe02e0d6f9e6ec8ada9d734493cc8ba67ece1d105bd700e70-merged.mount: Deactivated successfully.
Jan 26 10:28:06 compute-0 podman[293792]: 2026-01-26 10:28:06.908990184 +0000 UTC m=+1.070401991 container remove 5e38e76343a24719cc011b4b21f54aeecebd808b717b23b4bed63f4e0ec877a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:28:06 compute-0 systemd[1]: libpod-conmon-5e38e76343a24719cc011b4b21f54aeecebd808b717b23b4bed63f4e0ec877a8.scope: Deactivated successfully.
Jan 26 10:28:06 compute-0 sudo[293686]: pam_unix(sudo:session): session closed for user root
Jan 26 10:28:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:28:06 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:28:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:28:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:28:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:28:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:28:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:28:07 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:28:07 compute-0 sudo[293902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:28:07 compute-0 sudo[293902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:28:07 compute-0 sudo[293902]: pam_unix(sudo:session): session closed for user root
Jan 26 10:28:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:07.301Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:07 compute-0 ceph-mon[74456]: pgmap v1384: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:28:07 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:28:07 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:28:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:28:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:07.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:08.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:08 compute-0 ceph-mon[74456]: pgmap v1385: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:28:08 compute-0 sudo[293927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:28:08 compute-0 sudo[293927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:28:08 compute-0 sudo[293927]: pam_unix(sudo:session): session closed for user root
Jan 26 10:28:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:08.937Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:09 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:28:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:09.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:10 compute-0 podman[293954]: 2026-01-26 10:28:10.117957042 +0000 UTC m=+0.053174533 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 10:28:10 compute-0 nova_compute[254880]: 2026-01-26 10:28:10.208 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:28:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:10.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:10 compute-0 ceph-mon[74456]: pgmap v1386: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:28:11 compute-0 nova_compute[254880]: 2026-01-26 10:28:11.468 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:11 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:28:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:11.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:28:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:28:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:28:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:28:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:12.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:13 compute-0 ceph-mon[74456]: pgmap v1387: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.022412) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423293022535, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 945, "num_deletes": 251, "total_data_size": 1650913, "memory_usage": 1675504, "flush_reason": "Manual Compaction"}
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423293035369, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1584837, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38462, "largest_seqno": 39405, "table_properties": {"data_size": 1580168, "index_size": 2257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10334, "raw_average_key_size": 19, "raw_value_size": 1570776, "raw_average_value_size": 3009, "num_data_blocks": 99, "num_entries": 522, "num_filter_entries": 522, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769423215, "oldest_key_time": 1769423215, "file_creation_time": 1769423293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 12999 microseconds, and 6608 cpu microseconds.
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.035429) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1584837 bytes OK
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.035456) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.037151) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.037167) EVENT_LOG_v1 {"time_micros": 1769423293037162, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.037216) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1646471, prev total WAL file size 1646471, number of live WAL files 2.
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.037995) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1547KB)], [83(14MB)]
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423293038063, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 17097575, "oldest_snapshot_seqno": -1}
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6922 keys, 14808184 bytes, temperature: kUnknown
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423293133160, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 14808184, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14762927, "index_size": 26841, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 182440, "raw_average_key_size": 26, "raw_value_size": 14639226, "raw_average_value_size": 2114, "num_data_blocks": 1053, "num_entries": 6922, "num_filter_entries": 6922, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769423293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.133485) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 14808184 bytes
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.135141) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.6 rd, 155.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 14.8 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(20.1) write-amplify(9.3) OK, records in: 7438, records dropped: 516 output_compression: NoCompression
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.135161) EVENT_LOG_v1 {"time_micros": 1769423293135153, "job": 48, "event": "compaction_finished", "compaction_time_micros": 95223, "compaction_time_cpu_micros": 50108, "output_level": 6, "num_output_files": 1, "total_output_size": 14808184, "num_input_records": 7438, "num_output_records": 6922, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423293135695, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423293139275, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.037858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.139376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.139382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.139385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.139387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:28:13 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:28:13.139389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:28:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:13.624Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:13 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:13.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:14.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:15 compute-0 ceph-mon[74456]: pgmap v1388: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:15 compute-0 nova_compute[254880]: 2026-01-26 10:28:15.213 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:28:15 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:15.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:16.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:16 compute-0 nova_compute[254880]: 2026-01-26 10:28:16.470 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:28:16] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:28:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:28:16] "GET /metrics HTTP/1.1" 200 48535 "" "Prometheus/2.51.0"
Jan 26 10:28:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:28:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:28:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:28:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:28:17 compute-0 ceph-mon[74456]: pgmap v1389: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:17.303Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:17 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:17.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:18.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:28:18
Jan 26 10:28:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:28:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:28:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['default.rgw.control', '.nfs', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'backups']
Jan 26 10:28:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:28:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:28:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:28:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:28:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:28:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:28:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:28:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:28:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:28:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:18.937Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:19 compute-0 ceph-mon[74456]: pgmap v1390: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:28:19 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:28:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:19.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:28:20 compute-0 nova_compute[254880]: 2026-01-26 10:28:20.217 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:20.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:28:20 compute-0 ceph-mon[74456]: pgmap v1391: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:21 compute-0 nova_compute[254880]: 2026-01-26 10:28:21.474 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:21 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:21.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:28:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:28:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:28:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
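
The ganesha block above repeats every ~5 seconds: the server re-enters a 90-second grace period, reloads (zero) client records, and finds no clients to wait for. The ret=-45 from rados_cluster_grace_enforcing is an errno-style code; how the rados_cluster recovery backend interprets it is version-specific, but the raw value decodes as follows:

    # Decode ret=-45 from the rados_cluster_grace_enforcing event.
    import errno, os

    print(errno.errorcode[45])   # 'EL2NSYNC' on Linux
    print(os.strerror(45))       # 'Level 2 not synchronized'
    # Treat this as a decoding aid only; the log alone does not say whether
    # ganesha considers this state an error or simply "not enforcing yet".
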
Jan 26 10:28:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:22.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:22 compute-0 ceph-mon[74456]: pgmap v1392: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:23.625Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:28:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:23.626Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
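
Alertmanager on compute-0 cannot deliver the ceph-dashboard webhook to its peers: every attempt against compute-1 and compute-2 on port 8443 ends in a dial timeout or context deadline, and delivery is abandoned after two attempts. A quick reachability probe for the two targets, using the hosts and port exactly as they appear in the error text:

    # TCP reachability probe for the failing webhook receivers.
    import socket

    for host in ("192.168.122.101", "192.168.122.102"):
        s = socket.socket()
        s.settimeout(3)
        try:
            s.connect((host, 8443))
            print(host, "connect OK")
        except OSError as exc:
            print(host, "unreachable:", exc)  # matches the dial tcp timeouts
        finally:
            s.close()
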
Jan 26 10:28:23 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:28:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:23.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:28:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:24.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:25 compute-0 ceph-mon[74456]: pgmap v1393: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:25 compute-0 podman[293990]: 2026-01-26 10:28:25.150044118 +0000 UTC m=+0.087495246 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
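
The podman event above is a periodic healthcheck result for ovn_controller: the configured test is the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_controller, and the container reports healthy with a failing streak of 0. The same check can be run on demand; a sketch, assuming podman is in PATH and using the container name from the log:

    # Run the container healthcheck podman just reported on.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
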
Jan 26 10:28:25 compute-0 nova_compute[254880]: 2026-01-26 10:28:25.218 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:28:25 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:28:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:25.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:28:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:26.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:26 compute-0 nova_compute[254880]: 2026-01-26 10:28:26.476 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:28:26] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
Jan 26 10:28:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:28:26] "GET /metrics HTTP/1.1" 200 48537 "" "Prometheus/2.51.0"
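
Prometheus 2.51.0 is scraping the ceph-mgr prometheus module every 10 seconds (see the :26, :36 and :46 timestamps) and receiving ~48.5 KB of metrics each time. A manual scrape of the same endpoint; port 9283 is the module's default and an assumption here, since the access log omits it:

    # Manual scrape of the mgr prometheus endpoint seen above.
    from urllib.request import urlopen

    body = urlopen("http://192.168.122.100:9283/metrics", timeout=5).read()
    print(len(body), "bytes")   # the log shows 48537/48539-byte responses
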
Jan 26 10:28:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:28:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:28:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:28:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:28:27 compute-0 ceph-mon[74456]: pgmap v1394: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 10:28:27 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.3 total, 600.0 interval
                                           Cumulative writes: 8772 writes, 39K keys, 8771 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 8772 writes, 8771 syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1554 writes, 7467 keys, 1554 commit groups, 1.0 writes per commit group, ingest: 11.81 MB, 0.02 MB/s
                                           Interval WAL: 1554 writes, 1554 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     89.4      0.71              0.18        24    0.030       0      0       0.0       0.0
                                             L6      1/0   14.12 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.7    163.6    140.5      2.15              0.71        23    0.094    136K    13K       0.0       0.0
                                            Sum      1/0   14.12 MB   0.0      0.3     0.1      0.3       0.4      0.1       0.0   5.7    122.9    127.8      2.87              0.89        47    0.061    136K    13K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8    132.9    133.4      0.70              0.24        12    0.058     43K   3613       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    163.6    140.5      2.15              0.71        23    0.094    136K    13K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    140.1      0.45              0.18        23    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.26              0.00         1    0.259       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.062, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.36 GB write, 0.12 MB/s write, 0.34 GB read, 0.12 MB/s read, 2.9 seconds
                                           Interval compaction: 0.09 GB write, 0.16 MB/s write, 0.09 GB read, 0.16 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a9cd69b350#2 capacity: 304.00 MB usage: 33.00 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000263 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1956,31.95 MB,10.5112%) FilterBlock(48,395.92 KB,0.127185%) IndexBlock(48,675.75 KB,0.217076%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
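
The RocksDB dump above (whose file-read latency histogram is truncated in this capture) is unremarkable: one 14 MB L6 file, zero stalls, and write rates that check out against uptime. Two details worth verifying, as below: the printed MB/s follows from ingest over uptime, and the block-cache "occupancy" is UINT64_MAX, almost certainly a sentinel or counter artifact of BinnedLRUCache rather than a real entry count:

    # Cross-check the headline numbers from the stats dump.
    ingest_gb, secs = 0.07, 3000.3
    print(round(ingest_gb * 1024 / secs, 2), "MB/s")     # 0.02, as printed
    print(18446744073709551615 == 2**64 - 1)             # True: UINT64_MAX
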
Jan 26 10:28:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:27.304Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:27 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:27.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:28.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:28 compute-0 sudo[294020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:28:28 compute-0 sudo[294020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:28:28 compute-0 sudo[294020]: pam_unix(sudo:session): session closed for user root
Jan 26 10:28:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:28.938Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:29 compute-0 ceph-mon[74456]: pgmap v1395: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:29 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1396: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:29.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:30 compute-0 nova_compute[254880]: 2026-01-26 10:28:30.222 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:30.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:28:31 compute-0 ceph-mon[74456]: pgmap v1396: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:31 compute-0 nova_compute[254880]: 2026-01-26 10:28:31.477 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:31 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1397: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:28:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:31.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:28:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:28:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:28:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:28:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:28:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:32.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:33 compute-0 ceph-mon[74456]: pgmap v1397: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:33.626Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:28:33 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1398: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:28:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
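
The mon leader is dispatching a JSON "osd blocklist ls" issued by the mgr (mgr.compute-0.zllcia); a mgr module polls this periodically, and the audit channel records each dispatch. The manual equivalent, assuming a local ceph CLI with admin credentials:

    # Manual equivalent of the mgr's periodic blocklist query.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True)
    print(json.loads(out.stdout or "[]"))   # [] when nothing is blocklisted
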
Jan 26 10:28:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:33.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:28:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:34.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:35 compute-0 ceph-mon[74456]: pgmap v1398: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:35 compute-0 nova_compute[254880]: 2026-01-26 10:28:35.249 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:28:35 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1399: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:35.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:36 compute-0 nova_compute[254880]: 2026-01-26 10:28:36.505 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:28:36] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:28:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:28:36] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:28:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:28:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:28:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:28:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:28:37 compute-0 ceph-mon[74456]: pgmap v1399: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:37.305Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:37 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1400: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:37.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:38.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:38.939Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:28:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:38.939Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:28:39 compute-0 ceph-mon[74456]: pgmap v1400: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:39 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1401: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:39.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:40 compute-0 nova_compute[254880]: 2026-01-26 10:28:40.304 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:28:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:28:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:40.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:28:41 compute-0 podman[294058]: 2026-01-26 10:28:41.115179376 +0000 UTC m=+0.048940168 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
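
This is the same kind of healthcheck event as the ovn_controller one earlier, this time for ovn_metadata_agent, with the test script mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent. To see the healthcheck definition podman actually evaluates (rather than the edpm_ansible config_data echoed into the event), a hedged sketch:

    # Read the healthcheck definition from the running container.
    import json, subprocess

    out = subprocess.check_output(
        ["podman", "inspect", "ovn_metadata_agent",
         "--format", "{{json .Config.Healthcheck}}"], text=True)
    print(json.loads(out))   # expect {"Test": [...], ...}
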
Jan 26 10:28:41 compute-0 ceph-mon[74456]: pgmap v1401: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:41 compute-0 nova_compute[254880]: 2026-01-26 10:28:41.505 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:41 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1402: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:41.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:28:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:28:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:28:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:28:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:42.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:43 compute-0 ceph-mon[74456]: pgmap v1402: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:43.627Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:43 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1403: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:43.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:44.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:44 compute-0 nova_compute[254880]: 2026-01-26 10:28:44.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:28:45 compute-0 nova_compute[254880]: 2026-01-26 10:28:45.307 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:28:45 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1404: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:45.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:46.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:46 compute-0 nova_compute[254880]: 2026-01-26 10:28:46.507 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:46 compute-0 ceph-mon[74456]: pgmap v1403: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:28:46] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:28:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:28:46] "GET /metrics HTTP/1.1" 200 48539 "" "Prometheus/2.51.0"
Jan 26 10:28:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:28:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:28:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:28:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:28:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:47.306Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:47 compute-0 ceph-mon[74456]: pgmap v1404: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:47 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1405: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:47.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:47 compute-0 nova_compute[254880]: 2026-01-26 10:28:47.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:28:47 compute-0 nova_compute[254880]: 2026-01-26 10:28:47.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:28:47 compute-0 nova_compute[254880]: 2026-01-26 10:28:47.981 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:28:47 compute-0 nova_compute[254880]: 2026-01-26 10:28:47.981 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:28:47 compute-0 nova_compute[254880]: 2026-01-26 10:28:47.981 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:28:47 compute-0 nova_compute[254880]: 2026-01-26 10:28:47.982 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:28:47 compute-0 nova_compute[254880]: 2026-01-26 10:28:47.982 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
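
Nova's resource-tracker audit sizes its RBD backend by shelling out to ceph df as client.openstack; the mon audit lines that follow show the command being dispatched, and nova logs it returning 0 in ~0.45 s. The same call, trimmed to the fields the tracker cares about (the id and conf path are copied from the log line):

    # The ceph df call nova's resource tracker just ran.
    import json, subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.check_output(cmd, text=True))
    stats = df["stats"]
    print(f'{stats["total_avail_bytes"] / 2**30:.2f} GiB free of '
          f'{stats["total_bytes"] / 2**30:.2f} GiB')
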
Jan 26 10:28:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:48.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:28:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3262378480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:28:48 compute-0 nova_compute[254880]: 2026-01-26 10:28:48.436 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:28:48 compute-0 nova_compute[254880]: 2026-01-26 10:28:48.599 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:28:48 compute-0 nova_compute[254880]: 2026-01-26 10:28:48.600 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4499MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:28:48 compute-0 nova_compute[254880]: 2026-01-26 10:28:48.601 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:28:48 compute-0 nova_compute[254880]: 2026-01-26 10:28:48.601 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:28:48 compute-0 sudo[294107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:28:48 compute-0 sudo[294107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:28:48 compute-0 sudo[294107]: pam_unix(sudo:session): session closed for user root
Jan 26 10:28:48 compute-0 ceph-mon[74456]: pgmap v1405: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:48 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3262378480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:28:48 compute-0 nova_compute[254880]: 2026-01-26 10:28:48.703 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:28:48 compute-0 nova_compute[254880]: 2026-01-26 10:28:48.704 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:28:48 compute-0 nova_compute[254880]: 2026-01-26 10:28:48.728 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:28:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:28:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:28:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:28:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:28:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:28:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:28:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:28:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:28:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:48.940Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:28:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947100076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:28:49 compute-0 nova_compute[254880]: 2026-01-26 10:28:49.153 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:28:49 compute-0 nova_compute[254880]: 2026-01-26 10:28:49.158 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:28:49 compute-0 nova_compute[254880]: 2026-01-26 10:28:49.181 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
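
The inventory nova reports to placement implies the schedulable capacity directly: placement exposes (total - reserved) × allocation_ratio per resource class. Worked out for the numbers in the line above:

    # Schedulable capacity implied by the reported inventory.
    inventory = {
        "VCPU":      (8,    0,   4.0),
        "MEMORY_MB": (7679, 512, 1.0),
        "DISK_GB":   (59,   1,   0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, round((total - reserved) * ratio, 2))
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
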
Jan 26 10:28:49 compute-0 nova_compute[254880]: 2026-01-26 10:28:49.182 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:28:49 compute-0 nova_compute[254880]: 2026-01-26 10:28:49.182 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:28:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:28:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/947100076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:28:49 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1406: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:49.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:50 compute-0 nova_compute[254880]: 2026-01-26 10:28:50.183 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:28:50 compute-0 nova_compute[254880]: 2026-01-26 10:28:50.183 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:28:50 compute-0 nova_compute[254880]: 2026-01-26 10:28:50.184 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:28:50 compute-0 nova_compute[254880]: 2026-01-26 10:28:50.200 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:28:50 compute-0 nova_compute[254880]: 2026-01-26 10:28:50.312 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:28:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:50.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:50 compute-0 ceph-mon[74456]: pgmap v1406: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:51 compute-0 nova_compute[254880]: 2026-01-26 10:28:51.663 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3768636190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:28:51 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1407: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:51.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:51 compute-0 nova_compute[254880]: 2026-01-26 10:28:51.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:28:51 compute-0 nova_compute[254880]: 2026-01-26 10:28:51.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:28:51 compute-0 nova_compute[254880]: 2026-01-26 10:28:51.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:28:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:28:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:28:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:28:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:28:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:52.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:52 compute-0 ceph-mon[74456]: pgmap v1407: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2897644853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:28:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:53.629Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:53 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1408: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:53.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:53 compute-0 nova_compute[254880]: 2026-01-26 10:28:53.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:28:53 compute-0 nova_compute[254880]: 2026-01-26 10:28:53.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:28:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:54.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:28:54.716 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:28:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:28:54.717 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:28:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:28:54.717 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:28:54 compute-0 ceph-mon[74456]: pgmap v1408: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:55 compute-0 nova_compute[254880]: 2026-01-26 10:28:55.315 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:28:55 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1409: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:55.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:56 compute-0 podman[294161]: 2026-01-26 10:28:56.146603665 +0000 UTC m=+0.079204513 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 10:28:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:56.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:56 compute-0 nova_compute[254880]: 2026-01-26 10:28:56.511 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:28:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:28:56] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Jan 26 10:28:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:28:56] "GET /metrics HTTP/1.1" 200 48532 "" "Prometheus/2.51.0"
Jan 26 10:28:56 compute-0 ceph-mon[74456]: pgmap v1409: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:28:56 compute-0 nova_compute[254880]: 2026-01-26 10:28:56.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:28:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:28:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:28:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:28:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:28:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:28:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:57.307Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:57 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1410: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1175763920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:28:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:28:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:57.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:28:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:28:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:28:58.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:28:58 compute-0 ceph-mon[74456]: pgmap v1410: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2657310510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:28:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3037181311' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:28:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/3037181311' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:28:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:28:58.941Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:28:59 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1411: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:28:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:28:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.002000053s ======
Jan 26 10:28:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:28:59.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 26 10:29:00 compute-0 nova_compute[254880]: 2026-01-26 10:29:00.318 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:29:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:00.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:00 compute-0 ceph-mon[74456]: pgmap v1411: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:01 compute-0 nova_compute[254880]: 2026-01-26 10:29:01.513 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:01 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1412: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:01.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:29:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:29:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:29:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:29:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:02.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:02 compute-0 ceph-mon[74456]: pgmap v1412: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:03.630Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:03 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1413: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:29:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:29:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:29:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:03.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:29:03 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:29:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:04.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:05 compute-0 ceph-mon[74456]: pgmap v1413: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:05 compute-0 nova_compute[254880]: 2026-01-26 10:29:05.322 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:29:05 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1414: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:29:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:05.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:29:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:06.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-crash-compute-0[79794]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 26 10:29:06 compute-0 nova_compute[254880]: 2026-01-26 10:29:06.515 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:29:06] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:29:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:29:06] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:29:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:29:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:06 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:29:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:29:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:07 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:29:07 compute-0 ceph-mon[74456]: pgmap v1414: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:07 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:07.307Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:07 compute-0 sudo[294199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:29:07 compute-0 sudo[294199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:29:07 compute-0 sudo[294199]: pam_unix(sudo:session): session closed for user root
Jan 26 10:29:07 compute-0 sudo[294224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 26 10:29:07 compute-0 sudo[294224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:29:07 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1415: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:07 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:07 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:07 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:07.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:07 compute-0 sudo[294224]: pam_unix(sudo:session): session closed for user root
Jan 26 10:29:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:29:08 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:29:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 10:29:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:29:08 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1416: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:29:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 10:29:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:29:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 26 10:29:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:29:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 10:29:08 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:29:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 10:29:08 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:29:08 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:29:08 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:29:08 compute-0 sudo[294282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:29:08 compute-0 sudo[294282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:29:08 compute-0 sudo[294282]: pam_unix(sudo:session): session closed for user root
Jan 26 10:29:08 compute-0 sudo[294307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 26 10:29:08 compute-0 sudo[294307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:29:08 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:08 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:08 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:08.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:08 compute-0 podman[294374]: 2026-01-26 10:29:08.645062418 +0000 UTC m=+0.050064270 container create 3ed20b037574f3c926f78601acd26983571e8d09c56174dfc7c78331ff99481f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:29:08 compute-0 systemd[1]: Started libpod-conmon-3ed20b037574f3c926f78601acd26983571e8d09c56174dfc7c78331ff99481f.scope.
Jan 26 10:29:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:29:08 compute-0 podman[294374]: 2026-01-26 10:29:08.624136284 +0000 UTC m=+0.029138176 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:29:08 compute-0 podman[294374]: 2026-01-26 10:29:08.725298628 +0000 UTC m=+0.130300500 container init 3ed20b037574f3c926f78601acd26983571e8d09c56174dfc7c78331ff99481f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_banach, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 26 10:29:08 compute-0 podman[294374]: 2026-01-26 10:29:08.731904586 +0000 UTC m=+0.136906438 container start 3ed20b037574f3c926f78601acd26983571e8d09c56174dfc7c78331ff99481f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:29:08 compute-0 podman[294374]: 2026-01-26 10:29:08.735211784 +0000 UTC m=+0.140213636 container attach 3ed20b037574f3c926f78601acd26983571e8d09c56174dfc7c78331ff99481f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_banach, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:29:08 compute-0 hardcore_banach[294391]: 167 167
Jan 26 10:29:08 compute-0 systemd[1]: libpod-3ed20b037574f3c926f78601acd26983571e8d09c56174dfc7c78331ff99481f.scope: Deactivated successfully.
Jan 26 10:29:08 compute-0 podman[294374]: 2026-01-26 10:29:08.739086899 +0000 UTC m=+0.144088771 container died 3ed20b037574f3c926f78601acd26983571e8d09c56174dfc7c78331ff99481f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:29:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d62f2727c2ddc60374d644eb8c785fd51f435580c6f9db4a54f80c56906aeef1-merged.mount: Deactivated successfully.
Jan 26 10:29:08 compute-0 sudo[294394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:29:08 compute-0 sudo[294394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:29:08 compute-0 sudo[294394]: pam_unix(sudo:session): session closed for user root
Jan 26 10:29:08 compute-0 podman[294374]: 2026-01-26 10:29:08.78333788 +0000 UTC m=+0.188339732 container remove 3ed20b037574f3c926f78601acd26983571e8d09c56174dfc7c78331ff99481f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 10:29:08 compute-0 systemd[1]: libpod-conmon-3ed20b037574f3c926f78601acd26983571e8d09c56174dfc7c78331ff99481f.scope: Deactivated successfully.
Jan 26 10:29:08 compute-0 podman[294440]: 2026-01-26 10:29:08.929461155 +0000 UTC m=+0.038820397 container create 65cb98c5b55d02f0a5cd3a847e01e465f5c34fd8e01d4b23636a111d1b910120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:29:08 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:08.942Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:08 compute-0 systemd[1]: Started libpod-conmon-65cb98c5b55d02f0a5cd3a847e01e465f5c34fd8e01d4b23636a111d1b910120.scope.
Jan 26 10:29:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a24286aeed995fc56462bff9c7dffbf5c6df076d1efdb918939f88e9332af99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a24286aeed995fc56462bff9c7dffbf5c6df076d1efdb918939f88e9332af99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a24286aeed995fc56462bff9c7dffbf5c6df076d1efdb918939f88e9332af99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a24286aeed995fc56462bff9c7dffbf5c6df076d1efdb918939f88e9332af99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a24286aeed995fc56462bff9c7dffbf5c6df076d1efdb918939f88e9332af99/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 10:29:09 compute-0 podman[294440]: 2026-01-26 10:29:09.009482399 +0000 UTC m=+0.118841661 container init 65cb98c5b55d02f0a5cd3a847e01e465f5c34fd8e01d4b23636a111d1b910120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:29:09 compute-0 podman[294440]: 2026-01-26 10:29:08.914307337 +0000 UTC m=+0.023666609 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:29:09 compute-0 podman[294440]: 2026-01-26 10:29:09.01731042 +0000 UTC m=+0.126669652 container start 65cb98c5b55d02f0a5cd3a847e01e465f5c34fd8e01d4b23636a111d1b910120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:29:09 compute-0 podman[294440]: 2026-01-26 10:29:09.023189388 +0000 UTC m=+0.132548630 container attach 65cb98c5b55d02f0a5cd3a847e01e465f5c34fd8e01d4b23636a111d1b910120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:29:09 compute-0 ceph-mon[74456]: pgmap v1415: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:29:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 10:29:09 compute-0 ceph-mon[74456]: pgmap v1416: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:29:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:29:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:29:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 10:29:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 10:29:09 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:29:09 compute-0 distracted_torvalds[294456]: --> passed data devices: 0 physical, 1 LVM
Jan 26 10:29:09 compute-0 distracted_torvalds[294456]: --> All data devices are unavailable
Jan 26 10:29:09 compute-0 systemd[1]: libpod-65cb98c5b55d02f0a5cd3a847e01e465f5c34fd8e01d4b23636a111d1b910120.scope: Deactivated successfully.
Jan 26 10:29:09 compute-0 podman[294440]: 2026-01-26 10:29:09.347978893 +0000 UTC m=+0.457338165 container died 65cb98c5b55d02f0a5cd3a847e01e465f5c34fd8e01d4b23636a111d1b910120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a24286aeed995fc56462bff9c7dffbf5c6df076d1efdb918939f88e9332af99-merged.mount: Deactivated successfully.
Jan 26 10:29:09 compute-0 podman[294440]: 2026-01-26 10:29:09.404965367 +0000 UTC m=+0.514324649 container remove 65cb98c5b55d02f0a5cd3a847e01e465f5c34fd8e01d4b23636a111d1b910120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_torvalds, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:29:09 compute-0 systemd[1]: libpod-conmon-65cb98c5b55d02f0a5cd3a847e01e465f5c34fd8e01d4b23636a111d1b910120.scope: Deactivated successfully.
Jan 26 10:29:09 compute-0 sudo[294307]: pam_unix(sudo:session): session closed for user root
Jan 26 10:29:09 compute-0 sudo[294481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:29:09 compute-0 sudo[294481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:29:09 compute-0 sudo[294481]: pam_unix(sudo:session): session closed for user root
Jan 26 10:29:09 compute-0 sudo[294506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- lvm list --format json
Jan 26 10:29:09 compute-0 sudo[294506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:29:09 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:09 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:29:09 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:09.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:29:09 compute-0 podman[294570]: 2026-01-26 10:29:09.959733714 +0000 UTC m=+0.038361464 container create d605ae925e3d4e5d754f2189c76f554c79a5052ab02cf6a90837d2d5a27a2a93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_wozniak, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 26 10:29:09 compute-0 systemd[1]: Started libpod-conmon-d605ae925e3d4e5d754f2189c76f554c79a5052ab02cf6a90837d2d5a27a2a93.scope.
Jan 26 10:29:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:29:10 compute-0 podman[294570]: 2026-01-26 10:29:09.942440658 +0000 UTC m=+0.021068418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:29:10 compute-0 podman[294570]: 2026-01-26 10:29:10.038646329 +0000 UTC m=+0.117274079 container init d605ae925e3d4e5d754f2189c76f554c79a5052ab02cf6a90837d2d5a27a2a93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_wozniak, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 10:29:10 compute-0 podman[294570]: 2026-01-26 10:29:10.046000766 +0000 UTC m=+0.124628496 container start d605ae925e3d4e5d754f2189c76f554c79a5052ab02cf6a90837d2d5a27a2a93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_wozniak, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 10:29:10 compute-0 podman[294570]: 2026-01-26 10:29:10.049477041 +0000 UTC m=+0.128104771 container attach d605ae925e3d4e5d754f2189c76f554c79a5052ab02cf6a90837d2d5a27a2a93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_wozniak, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 10:29:10 compute-0 beautiful_wozniak[294586]: 167 167
Jan 26 10:29:10 compute-0 systemd[1]: libpod-d605ae925e3d4e5d754f2189c76f554c79a5052ab02cf6a90837d2d5a27a2a93.scope: Deactivated successfully.
Jan 26 10:29:10 compute-0 podman[294570]: 2026-01-26 10:29:10.051760851 +0000 UTC m=+0.130388581 container died d605ae925e3d4e5d754f2189c76f554c79a5052ab02cf6a90837d2d5a27a2a93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 26 10:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd4670cf46e7221eab12210685ff5970c0a7eeab784b8b7aaae0d1e88d9ced32-merged.mount: Deactivated successfully.
Jan 26 10:29:10 compute-0 podman[294570]: 2026-01-26 10:29:10.081809311 +0000 UTC m=+0.160437041 container remove d605ae925e3d4e5d754f2189c76f554c79a5052ab02cf6a90837d2d5a27a2a93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:29:10 compute-0 systemd[1]: libpod-conmon-d605ae925e3d4e5d754f2189c76f554c79a5052ab02cf6a90837d2d5a27a2a93.scope: Deactivated successfully.
Jan 26 10:29:10 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1417: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 26 10:29:10 compute-0 podman[294609]: 2026-01-26 10:29:10.223267269 +0000 UTC m=+0.035555888 container create 98aec75acbcd54792b8d9bddec0fddab1ae9d644918bb116184a8182c833482a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_saha, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 26 10:29:10 compute-0 systemd[1]: Started libpod-conmon-98aec75acbcd54792b8d9bddec0fddab1ae9d644918bb116184a8182c833482a.scope.
Jan 26 10:29:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f44b1e635414ba98e0848df8059ff6f392a0dcf8aada35aa8014df709e0dd58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f44b1e635414ba98e0848df8059ff6f392a0dcf8aada35aa8014df709e0dd58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f44b1e635414ba98e0848df8059ff6f392a0dcf8aada35aa8014df709e0dd58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f44b1e635414ba98e0848df8059ff6f392a0dcf8aada35aa8014df709e0dd58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:29:10 compute-0 podman[294609]: 2026-01-26 10:29:10.20993304 +0000 UTC m=+0.022221679 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:29:10 compute-0 podman[294609]: 2026-01-26 10:29:10.308988437 +0000 UTC m=+0.121277126 container init 98aec75acbcd54792b8d9bddec0fddab1ae9d644918bb116184a8182c833482a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_saha, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:29:10 compute-0 podman[294609]: 2026-01-26 10:29:10.317134157 +0000 UTC m=+0.129422776 container start 98aec75acbcd54792b8d9bddec0fddab1ae9d644918bb116184a8182c833482a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 10:29:10 compute-0 podman[294609]: 2026-01-26 10:29:10.320884208 +0000 UTC m=+0.133172827 container attach 98aec75acbcd54792b8d9bddec0fddab1ae9d644918bb116184a8182c833482a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 10:29:10 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:29:10 compute-0 nova_compute[254880]: 2026-01-26 10:29:10.325 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:10 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:10 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:29:10 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:10.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:29:10 compute-0 pensive_saha[294625]: {
Jan 26 10:29:10 compute-0 pensive_saha[294625]:     "0": [
Jan 26 10:29:10 compute-0 pensive_saha[294625]:         {
Jan 26 10:29:10 compute-0 pensive_saha[294625]:             "devices": [
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "/dev/loop3"
Jan 26 10:29:10 compute-0 pensive_saha[294625]:             ],
Jan 26 10:29:10 compute-0 pensive_saha[294625]:             "lv_name": "ceph_lv0",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:             "lv_size": "21470642176",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1a70b85d-e3fd-5814-8a6a-37ea00fcae30,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac85653c-ceaa-4fd5-80ce-94914596ed49,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:             "lv_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:             "name": "ceph_lv0",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:             "tags": {
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "ceph.block_uuid": "JkcRgp-bPJc-dfT7-KzD0-kQZP-sOj5-t2SbgS",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "ceph.cluster_fsid": "1a70b85d-e3fd-5814-8a6a-37ea00fcae30",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "ceph.cluster_name": "ceph",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "ceph.crush_device_class": "",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "ceph.encrypted": "0",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "ceph.osd_fsid": "ac85653c-ceaa-4fd5-80ce-94914596ed49",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "ceph.osd_id": "0",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "ceph.type": "block",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "ceph.vdo": "0",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:                 "ceph.with_tpm": "0"
Jan 26 10:29:10 compute-0 pensive_saha[294625]:             },
Jan 26 10:29:10 compute-0 pensive_saha[294625]:             "type": "block",
Jan 26 10:29:10 compute-0 pensive_saha[294625]:             "vg_name": "ceph_vg0"
Jan 26 10:29:10 compute-0 pensive_saha[294625]:         }
Jan 26 10:29:10 compute-0 pensive_saha[294625]:     ]
Jan 26 10:29:10 compute-0 pensive_saha[294625]: }
Jan 26 10:29:10 compute-0 systemd[1]: libpod-98aec75acbcd54792b8d9bddec0fddab1ae9d644918bb116184a8182c833482a.scope: Deactivated successfully.
Jan 26 10:29:10 compute-0 podman[294609]: 2026-01-26 10:29:10.615217643 +0000 UTC m=+0.427506262 container died 98aec75acbcd54792b8d9bddec0fddab1ae9d644918bb116184a8182c833482a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_saha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f44b1e635414ba98e0848df8059ff6f392a0dcf8aada35aa8014df709e0dd58-merged.mount: Deactivated successfully.
Jan 26 10:29:10 compute-0 podman[294609]: 2026-01-26 10:29:10.651759566 +0000 UTC m=+0.464048185 container remove 98aec75acbcd54792b8d9bddec0fddab1ae9d644918bb116184a8182c833482a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:29:10 compute-0 systemd[1]: libpod-conmon-98aec75acbcd54792b8d9bddec0fddab1ae9d644918bb116184a8182c833482a.scope: Deactivated successfully.
Jan 26 10:29:10 compute-0 sudo[294506]: pam_unix(sudo:session): session closed for user root
Jan 26 10:29:10 compute-0 sudo[294648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 10:29:10 compute-0 sudo[294648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:29:10 compute-0 sudo[294648]: pam_unix(sudo:session): session closed for user root
Jan 26 10:29:10 compute-0 sudo[294674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1a70b85d-e3fd-5814-8a6a-37ea00fcae30/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 1a70b85d-e3fd-5814-8a6a-37ea00fcae30 -- raw list --format json
Jan 26 10:29:10 compute-0 sudo[294674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:29:11 compute-0 ceph-mon[74456]: pgmap v1417: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 26 10:29:11 compute-0 podman[294738]: 2026-01-26 10:29:11.261605065 +0000 UTC m=+0.046070440 container create e5809a1e9ee3a2f6b0fcd44bed6f712b80e7ae00f39a3661c874ec7464af3dd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_curran, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 26 10:29:11 compute-0 systemd[1]: Started libpod-conmon-e5809a1e9ee3a2f6b0fcd44bed6f712b80e7ae00f39a3661c874ec7464af3dd4.scope.
Jan 26 10:29:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:29:11 compute-0 podman[294738]: 2026-01-26 10:29:11.322749832 +0000 UTC m=+0.107215207 container init e5809a1e9ee3a2f6b0fcd44bed6f712b80e7ae00f39a3661c874ec7464af3dd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_curran, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 10:29:11 compute-0 podman[294738]: 2026-01-26 10:29:11.3304758 +0000 UTC m=+0.114941175 container start e5809a1e9ee3a2f6b0fcd44bed6f712b80e7ae00f39a3661c874ec7464af3dd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:29:11 compute-0 podman[294738]: 2026-01-26 10:29:11.236788338 +0000 UTC m=+0.021253803 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:29:11 compute-0 podman[294738]: 2026-01-26 10:29:11.333259175 +0000 UTC m=+0.117724570 container attach e5809a1e9ee3a2f6b0fcd44bed6f712b80e7ae00f39a3661c874ec7464af3dd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_curran, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 10:29:11 compute-0 elastic_curran[294755]: 167 167
Jan 26 10:29:11 compute-0 systemd[1]: libpod-e5809a1e9ee3a2f6b0fcd44bed6f712b80e7ae00f39a3661c874ec7464af3dd4.scope: Deactivated successfully.
Jan 26 10:29:11 compute-0 podman[294738]: 2026-01-26 10:29:11.336591475 +0000 UTC m=+0.121056860 container died e5809a1e9ee3a2f6b0fcd44bed6f712b80e7ae00f39a3661c874ec7464af3dd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_curran, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 10:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a9715ece821f33fbd17885847ad100ccb5b2620cf69976ec408d8eef1b926df-merged.mount: Deactivated successfully.
Jan 26 10:29:11 compute-0 podman[294738]: 2026-01-26 10:29:11.366819789 +0000 UTC m=+0.151285164 container remove e5809a1e9ee3a2f6b0fcd44bed6f712b80e7ae00f39a3661c874ec7464af3dd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 26 10:29:11 compute-0 podman[294752]: 2026-01-26 10:29:11.375355009 +0000 UTC m=+0.075476573 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 10:29:11 compute-0 systemd[1]: libpod-conmon-e5809a1e9ee3a2f6b0fcd44bed6f712b80e7ae00f39a3661c874ec7464af3dd4.scope: Deactivated successfully.
Jan 26 10:29:11 compute-0 nova_compute[254880]: 2026-01-26 10:29:11.516 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:11 compute-0 podman[294794]: 2026-01-26 10:29:11.535460859 +0000 UTC m=+0.049720830 container create b2c4cac96c63aa83707be456cefa7c40b391772ae8b318b5c8f6e1ae2d24e80d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_cerf, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:29:11 compute-0 systemd[1]: Started libpod-conmon-b2c4cac96c63aa83707be456cefa7c40b391772ae8b318b5c8f6e1ae2d24e80d.scope.
Jan 26 10:29:11 compute-0 podman[294794]: 2026-01-26 10:29:11.510757823 +0000 UTC m=+0.025017884 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 26 10:29:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 10:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286fe058d9f50bb685173cb82479418ad59dba15400c66721e22afb45e54efd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 10:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286fe058d9f50bb685173cb82479418ad59dba15400c66721e22afb45e54efd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 10:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286fe058d9f50bb685173cb82479418ad59dba15400c66721e22afb45e54efd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 10:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286fe058d9f50bb685173cb82479418ad59dba15400c66721e22afb45e54efd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 10:29:11 compute-0 podman[294794]: 2026-01-26 10:29:11.626677684 +0000 UTC m=+0.140937695 container init b2c4cac96c63aa83707be456cefa7c40b391772ae8b318b5c8f6e1ae2d24e80d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 10:29:11 compute-0 podman[294794]: 2026-01-26 10:29:11.635860272 +0000 UTC m=+0.150120243 container start b2c4cac96c63aa83707be456cefa7c40b391772ae8b318b5c8f6e1ae2d24e80d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 10:29:11 compute-0 podman[294794]: 2026-01-26 10:29:11.638839502 +0000 UTC m=+0.153099513 container attach b2c4cac96c63aa83707be456cefa7c40b391772ae8b318b5c8f6e1ae2d24e80d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_cerf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 10:29:11 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:11 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:11 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:11.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:11 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:29:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:29:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:29:12 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:12 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:29:12 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1418: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:29:12 compute-0 lvm[294885]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:29:12 compute-0 lvm[294885]: VG ceph_vg0 finished
Jan 26 10:29:12 compute-0 hardcore_cerf[294810]: {}
Jan 26 10:29:12 compute-0 systemd[1]: libpod-b2c4cac96c63aa83707be456cefa7c40b391772ae8b318b5c8f6e1ae2d24e80d.scope: Deactivated successfully.
Jan 26 10:29:12 compute-0 podman[294794]: 2026-01-26 10:29:12.366212696 +0000 UTC m=+0.880472687 container died b2c4cac96c63aa83707be456cefa7c40b391772ae8b318b5c8f6e1ae2d24e80d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_cerf, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 26 10:29:12 compute-0 systemd[1]: libpod-b2c4cac96c63aa83707be456cefa7c40b391772ae8b318b5c8f6e1ae2d24e80d.scope: Consumed 1.132s CPU time.
Jan 26 10:29:12 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:12 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:12 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:12.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-286fe058d9f50bb685173cb82479418ad59dba15400c66721e22afb45e54efd9-merged.mount: Deactivated successfully.
Jan 26 10:29:12 compute-0 podman[294794]: 2026-01-26 10:29:12.414237349 +0000 UTC m=+0.928497350 container remove b2c4cac96c63aa83707be456cefa7c40b391772ae8b318b5c8f6e1ae2d24e80d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_cerf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 26 10:29:12 compute-0 systemd[1]: libpod-conmon-b2c4cac96c63aa83707be456cefa7c40b391772ae8b318b5c8f6e1ae2d24e80d.scope: Deactivated successfully.
Jan 26 10:29:12 compute-0 sudo[294674]: pam_unix(sudo:session): session closed for user root
Jan 26 10:29:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 10:29:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:29:12 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 10:29:12 compute-0 ceph-mon[74456]: log_channel(audit) log [INF] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:29:12 compute-0 sudo[294900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 10:29:12 compute-0 sudo[294900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:29:12 compute-0 sudo[294900]: pam_unix(sudo:session): session closed for user root
Jan 26 10:29:13 compute-0 ceph-mon[74456]: pgmap v1418: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:29:13 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:29:13 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' 
Jan 26 10:29:13 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:13.631Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:13 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:13 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:13 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:13.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:14 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1419: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:29:14 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:14 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:29:14 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:14.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:29:15 compute-0 ceph-mon[74456]: pgmap v1419: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:29:15 compute-0 nova_compute[254880]: 2026-01-26 10:29:15.329 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:15 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:29:15 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:15 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:29:15 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:15.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:29:16 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1420: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:29:16 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:16 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:16 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:16.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:16 compute-0 nova_compute[254880]: 2026-01-26 10:29:16.517 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:16 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:29:16] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:29:16 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:29:16] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:29:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:29:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:29:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:16 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:29:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:17 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:29:17 compute-0 ceph-mon[74456]: pgmap v1420: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:29:17 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:17.309Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:17 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:17 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:17 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:17.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:18 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1421: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:29:18 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:18 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:18 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:18.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Optimize plan auto_2026-01-26_10:29:18
Jan 26 10:29:18 compute-0 ceph-mgr[74755]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 10:29:18 compute-0 ceph-mgr[74755]: [balancer INFO root] do_upmap
Jan 26 10:29:18 compute-0 ceph-mgr[74755]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.meta', 'images', '.nfs', 'default.rgw.control', '.mgr']
Jan 26 10:29:18 compute-0 ceph-mgr[74755]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 10:29:18 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:29:18 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:29:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:29:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:29:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:29:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:29:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:29:18 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:29:18 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:18.944Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:19 compute-0 ceph-mon[74456]: pgmap v1421: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 26 10:29:19 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 10:29:19 compute-0 ceph-mgr[74755]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 10:29:19 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:19 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:19 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:19.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:20 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1422: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:20 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:29:20 compute-0 nova_compute[254880]: 2026-01-26 10:29:20.332 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:20 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:20 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:20 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:20.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:21 compute-0 ceph-mon[74456]: pgmap v1422: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:21 compute-0 nova_compute[254880]: 2026-01-26 10:29:21.519 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:21 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:21 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:21 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:21.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:29:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:29:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:21 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:29:22 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:22 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:29:22 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1423: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:22 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:22 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:22 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:22.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:23 compute-0 ceph-mon[74456]: pgmap v1423: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:23 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:23.633Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:23 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:23 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:23 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:23.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:24 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1424: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:24 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:24 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:24 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:24.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:25 compute-0 ceph-mon[74456]: pgmap v1424: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:25 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:29:25 compute-0 nova_compute[254880]: 2026-01-26 10:29:25.372 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:25 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:25 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:25 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:25.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:26 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1425: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:26 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:26 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:26 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:26.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:26 compute-0 nova_compute[254880]: 2026-01-26 10:29:26.577 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:26 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:29:26] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:29:26 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:29:26] "GET /metrics HTTP/1.1" 200 48534 "" "Prometheus/2.51.0"
Jan 26 10:29:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:29:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:29:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:26 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:29:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:27 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:29:27 compute-0 podman[294940]: 2026-01-26 10:29:27.150007492 +0000 UTC m=+0.084615604 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 26 10:29:27 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:27.310Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:27 compute-0 ceph-mon[74456]: pgmap v1425: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:27 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:27 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:27 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:27.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:28 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1426: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:28 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:28 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:29:28 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:28.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:29:28 compute-0 sudo[294968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:29:28 compute-0 sudo[294968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:29:28 compute-0 sudo[294968]: pam_unix(sudo:session): session closed for user root
Jan 26 10:29:28 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:28.945Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:29 compute-0 ceph-mon[74456]: pgmap v1426: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:29 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:29 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:29 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:29.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:30 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1427: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:30 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:29:30 compute-0 nova_compute[254880]: 2026-01-26 10:29:30.376 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:30 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:30 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:29:30 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:30.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:29:31 compute-0 ceph-mon[74456]: pgmap v1427: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:31 compute-0 nova_compute[254880]: 2026-01-26 10:29:31.579 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:31 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:31 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:31 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:31.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:29:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:29:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:31 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:29:32 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:32 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:29:32 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1428: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:32 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:32 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:32 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:32.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:33 compute-0 ceph-mon[74456]: pgmap v1428: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:33 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:33.634Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:33 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:29:33 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:29:33 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:33 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:33 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:33.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:34 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1429: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:34 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:29:34 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:34 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:34 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:34.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:35 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:29:35 compute-0 nova_compute[254880]: 2026-01-26 10:29:35.379 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:35 compute-0 ceph-mon[74456]: pgmap v1429: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:35 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:35 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 10:29:35 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:35.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 10:29:36 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1430: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:36 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:36 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:36 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:36.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:36 compute-0 nova_compute[254880]: 2026-01-26 10:29:36.582 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:36 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:29:36] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 26 10:29:36 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:29:36] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 26 10:29:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:29:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:29:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:36 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:29:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:37 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:29:37 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:37.311Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:37 compute-0 ceph-mon[74456]: pgmap v1430: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:37 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:37 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:37 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:37.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:38 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1431: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:38 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:38 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:38 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:38.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:38 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:38.946Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:38 compute-0 sshd-session[295003]: Accepted publickey for zuul from 192.168.122.10 port 58716 ssh2: ECDSA SHA256:3+mD6W9podl8Ei5P+Dtw+049tIr7OsvnVW8okhUeQyk
Jan 26 10:29:38 compute-0 systemd-logind[787]: New session 59 of user zuul.
Jan 26 10:29:38 compute-0 systemd[1]: Started Session 59 of User zuul.
Jan 26 10:29:38 compute-0 sshd-session[295003]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 10:29:39 compute-0 sudo[295007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 26 10:29:39 compute-0 sudo[295007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 10:29:39 compute-0 ceph-mon[74456]: pgmap v1431: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:39 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:39 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:39 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:39.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:40 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1432: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:40 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:29:40 compute-0 nova_compute[254880]: 2026-01-26 10:29:40.413 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:40 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:40 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:40 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:40.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:41 compute-0 ceph-mon[74456]: pgmap v1432: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17913 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:41 compute-0 nova_compute[254880]: 2026-01-26 10:29:41.633 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27683 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27307 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:41 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:41 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:41 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:41.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:41 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17922 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:41 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:29:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:29:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:29:42 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:42 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:29:42 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1433: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:42 compute-0 podman[295217]: 2026-01-26 10:29:42.147584321 +0000 UTC m=+0.065093956 container health_status 8bf49d6b021d0af148cfb795b3792ebd2e4a652c8d360ad6cfedd22a20e41d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 10:29:42 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27692 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:42 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27322 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:42 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:42 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:42 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:42.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:42 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Jan 26 10:29:42 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/304949107' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 10:29:42 compute-0 ceph-mon[74456]: from='client.17913 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:42 compute-0 ceph-mon[74456]: from='client.27683 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:42 compute-0 ceph-mon[74456]: from='client.27307 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:42 compute-0 ceph-mon[74456]: from='client.17922 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:42 compute-0 ceph-mon[74456]: pgmap v1433: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:42 compute-0 ceph-mon[74456]: from='client.27692 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:42 compute-0 ceph-mon[74456]: from='client.27322 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:42 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/304949107' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 10:29:43 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2606455545' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 10:29:43 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1179573448' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 10:29:43 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:43.635Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:43 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:43 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:43 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:43.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:44 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1434: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:44 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:44 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:44 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:44.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:44 compute-0 ceph-mon[74456]: pgmap v1434: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:45 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:29:45 compute-0 nova_compute[254880]: 2026-01-26 10:29:45.454 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:45 compute-0 ovs-vsctl[295363]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 26 10:29:45 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:45 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:45 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:45.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:45 compute-0 nova_compute[254880]: 2026-01-26 10:29:45.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:29:46 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1435: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:46 compute-0 virtqemud[254348]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 26 10:29:46 compute-0 virtqemud[254348]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 26 10:29:46 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:46 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:46 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:46.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:46 compute-0 virtqemud[254348]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 26 10:29:46 compute-0 ceph-mon[74456]: pgmap v1435: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:46 compute-0 nova_compute[254880]: 2026-01-26 10:29:46.634 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:46 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:29:46] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 26 10:29:46 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:29:46] "GET /metrics HTTP/1.1" 200 48536 "" "Prometheus/2.51.0"
Jan 26 10:29:46 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: cache status {prefix=cache status} (starting...)
Jan 26 10:29:46 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:29:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:46 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:29:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:29:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:29:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:47 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:29:47 compute-0 lvm[295693]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 10:29:47 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: client ls {prefix=client ls} (starting...)
Jan 26 10:29:47 compute-0 lvm[295693]: VG ceph_vg0 finished
Jan 26 10:29:47 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:29:47 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:47.312Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27716 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17943 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:47 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: damage ls {prefix=damage ls} (starting...)
Jan 26 10:29:47 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:29:47 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:47 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:47 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:47.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:47 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: dump loads {prefix=dump loads} (starting...)
Jan 26 10:29:47 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:29:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 26 10:29:47 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:29:47 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 26 10:29:47 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2963176918' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:29:47 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27728 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:48 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1597042348' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:29:48 compute-0 ceph-mon[74456]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:29:48 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2963176918' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:29:48 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 26 10:29:48 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:29:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17958 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:48 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1436: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:48 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 26 10:29:48 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:29:48 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 26 10:29:48 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:29:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 10:29:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62027242' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:29:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27740 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:48 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 26 10:29:48 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:29:48 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:48 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:48 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:48.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17970 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27346 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Jan 26 10:29:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513814966' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 26 10:29:48 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 26 10:29:48 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:29:48 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:29:48 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:29:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:29:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:29:48 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 26 10:29:48 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:29:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27755 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:29:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:29:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 10:29:48 compute-0 ceph-mgr[74755]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 10:29:48 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.17985 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:48 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:48.948Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:29:48 compute-0 nova_compute[254880]: 2026-01-26 10:29:48.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:29:48 compute-0 sudo[295997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 26 10:29:48 compute-0 sudo[295997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 10:29:48 compute-0 sudo[295997]: pam_unix(sudo:session): session closed for user root
Jan 26 10:29:49 compute-0 nova_compute[254880]: 2026-01-26 10:29:48.998 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:29:49 compute-0 nova_compute[254880]: 2026-01-26 10:29:48.998 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:29:49 compute-0 nova_compute[254880]: 2026-01-26 10:29:48.999 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:29:49 compute-0 nova_compute[254880]: 2026-01-26 10:29:48.999 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 10:29:49 compute-0 nova_compute[254880]: 2026-01-26 10:29:48.999 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:29:49 compute-0 ceph-mon[74456]: from='client.27716 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mon[74456]: from='client.17943 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mon[74456]: from='client.27728 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mon[74456]: from='client.17958 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mon[74456]: pgmap v1436: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3490216551' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/62027242' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mon[74456]: from='client.27740 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3430578706' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/513814966' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 26 10:29:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27373 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: ops {prefix=ops} (starting...)
Jan 26 10:29:49 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:29:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 26 10:29:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1223797148' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 26 10:29:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2403931818' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:29:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/70919432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:29:49 compute-0 nova_compute[254880]: 2026-01-26 10:29:49.443 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:29:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27394 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18021 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27788 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:49 compute-0 nova_compute[254880]: 2026-01-26 10:29:49.592 254884 WARNING nova.virt.libvirt.driver [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 10:29:49 compute-0 nova_compute[254880]: 2026-01-26 10:29:49.593 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4395MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 10:29:49 compute-0 nova_compute[254880]: 2026-01-26 10:29:49.593 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:29:49 compute-0 nova_compute[254880]: 2026-01-26 10:29:49.594 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:29:49 compute-0 nova_compute[254880]: 2026-01-26 10:29:49.680 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 10:29:49 compute-0 nova_compute[254880]: 2026-01-26 10:29:49.680 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 10:29:49 compute-0 nova_compute[254880]: 2026-01-26 10:29:49.700 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 10:29:49 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 26 10:29:49 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3572298696' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: session ls {prefix=session ls} (starting...)
Jan 26 10:29:49 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu Can't run that command on an inactive MDS!
Jan 26 10:29:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27409 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:49 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:49 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:49 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:49.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27418 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:49 compute-0 ceph-mds[97403]: mds.cephfs.compute-0.zhqpiu asok_command: status {prefix=status} (starting...)
Jan 26 10:29:49 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18042 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.17970 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.27346 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.27755 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.17985 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3446623633' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.27373 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1223797148' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3278578878' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2403931818' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1209062128' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3162243807' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/70919432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3572298696' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/926128577' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4111243696' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1437: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 10:29:50 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2683446461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 26 10:29:50 compute-0 nova_compute[254880]: 2026-01-26 10:29:50.184 254884 DEBUG oslo_concurrency.processutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 10:29:50 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1733620249' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:29:50 compute-0 nova_compute[254880]: 2026-01-26 10:29:50.190 254884 DEBUG nova.compute.provider_tree [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed in ProviderTree for provider: 0dd9ba26-1c92-4319-953d-4e0ed59143cf update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 10:29:50 compute-0 nova_compute[254880]: 2026-01-26 10:29:50.205 254884 DEBUG nova.scheduler.client.report [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Inventory has not changed for provider 0dd9ba26-1c92-4319-953d-4e0ed59143cf based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 10:29:50 compute-0 nova_compute[254880]: 2026-01-26 10:29:50.206 254884 DEBUG nova.compute.resource_tracker [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 10:29:50 compute-0 nova_compute[254880]: 2026-01-26 10:29:50.206 254884 DEBUG oslo_concurrency.lockutils [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:29:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:29:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 26 10:29:50 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 26 10:29:50 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908729226' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:29:50 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:50 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:50 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:50.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:50 compute-0 nova_compute[254880]: 2026-01-26 10:29:50.456 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 26 10:29:50 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2194018511' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27454 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:50 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 26 10:29:50 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1332807695' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.27394 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.18021 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.27788 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.27409 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.27418 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.18042 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: pgmap v1437: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2683446461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1733620249' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/266995504' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2091732254' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1765323621' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1646311646' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2908729226' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2194018511' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2590876303' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1140252076' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2610141522' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1332807695' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27484 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 26 10:29:51 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2634675257' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:29:51 compute-0 nova_compute[254880]: 2026-01-26 10:29:51.206 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:29:51 compute-0 nova_compute[254880]: 2026-01-26 10:29:51.207 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 10:29:51 compute-0 nova_compute[254880]: 2026-01-26 10:29:51.207 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 10:29:51 compute-0 nova_compute[254880]: 2026-01-26 10:29:51.236 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 10:29:51 compute-0 nova_compute[254880]: 2026-01-26 10:29:51.237 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:29:51 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27881 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T10:29:51.253+0000 7ff0f59d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 10:29:51 compute-0 ceph-mgr[74755]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 10:29:51 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18114 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:51 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T10:29:51.254+0000 7ff0f59d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 10:29:51 compute-0 ceph-mgr[74755]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 10:29:51 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 26 10:29:51 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870307844' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 26 10:29:51 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 26 10:29:51 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:29:51 compute-0 nova_compute[254880]: 2026-01-26 10:29:51.635 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:51 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 26 10:29:51 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3034132189' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 26 10:29:51 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:51 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:51 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:51.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:51 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:29:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:29:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:29:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:52 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:29:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 26 10:29:52 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3117585736' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 26 10:29:52 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/27549982' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.27454 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.27484 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2634675257' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/965486565' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2839110286' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.27881 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.18114 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2870307844' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3561081167' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2903462977' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2080751942' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2842339571' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3034132189' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3117585736' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1562471956' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/27549982' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1438: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 26 10:29:52 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2923571804' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18147 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27550 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:52 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: 2026-01-26T10:29:52.399+0000 7ff0f59d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 10:29:52 compute-0 ceph-mgr[74755]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 10:29:52 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27926 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:52 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:52 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:52 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:52.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:52 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 26 10:29:52 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4016153586' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27941 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:52 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18162 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:52 compute-0 nova_compute[254880]: 2026-01-26 10:29:52.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:29:52 compute-0 nova_compute[254880]: 2026-01-26 10:29:52.997 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:29:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/352923986' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mon[74456]: pgmap v1438: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3441896431' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2923571804' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mon[74456]: from='client.18147 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mon[74456]: from='client.27550 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mon[74456]: from='client.27926 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2655571253' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4016153586' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/657819002' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2740923979' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4066770674' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 26 10:29:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425527103' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27962 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18186 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:53 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27604 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:08.412154+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85671936 unmapped: 3497984 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:09.412284+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 3465216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:10.412474+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 3465216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:11.412688+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984618 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 3465216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:12.412865+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 3465216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:13.413001+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 3465216 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:14.413128+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:15.413255+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:16.413386+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984618 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:17.413544+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:18.413684+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:19.413810+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:20.413972+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:21.414167+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984618 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:22.414344+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd6c2c00 session 0x55c5bf1b63c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:23.414499+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:24.414633+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:25.414768+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:26.414912+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984618 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:27.415074+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:28.415226+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:29.415367+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:30.415498+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:31.415739+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984618 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 3457024 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:32.415905+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:33.416168+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.331504822s of 32.434001923s, submitted: 2
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:34.416310+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:35.416428+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:36.416595+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984750 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:37.416778+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:38.416964+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:39.417131+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c067fc00 session 0x55c5c0ab41e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c04d4400 session 0x55c5c0132000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf4c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:40.417317+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:41.417582+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987774 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:42.417715+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:43.417859+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:44.418037+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:45.418250+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:46.418418+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987774 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:47.418596+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.164649963s of 14.661804199s, submitted: 3
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:48.418733+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:49.418922+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:50.419051+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:51.419227+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987642 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:52.419374+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf4c00 session 0x55c5bff8f860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd314c00 session 0x55c5c0abaf00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:53.419508+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:54.419650+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:55.419792+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:56.419956+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987774 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:57.420126+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:58.420263+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:57:59.420420+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 3448832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:00.420549+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3440640 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:01.420903+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987774 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3440640 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:02.421035+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 3440640 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:03.421156+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.274815559s of 15.399731636s, submitted: 2
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:04.421272+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:05.421390+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:06.421562+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987774 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:07.421856+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:08.422064+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:09.422215+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:10.422343+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:11.422478+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988695 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:12.422613+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:13.422781+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:14.422904+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:15.423057+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:16.423225+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988695 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:17.423683+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 3432448 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:18.423815+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.839399338s of 14.924924850s, submitted: 4
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 3424256 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:19.423955+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:20.424107+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:21.424251+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:22.424435+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:23.424575+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:24.424707+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:25.424839+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:26.424990+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:27.425167+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:28.425317+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:29.425485+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:30.425605+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:31.425808+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:32.426045+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:33.426217+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:34.426338+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:35.426534+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:36.426745+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:37.426983+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:38.427151+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:39.427336+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:40.427490+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:41.427656+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:42.427832+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:43.428001+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:44.428227+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:45.428572+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:46.428735+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 3416064 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:47.429266+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:48.429453+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:49.429575+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c06c0400 session 0x55c5bd708d20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c067f400 session 0x55c5c0ab4d20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:50.429785+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:51.429939+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:52.430171+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:53.430370+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:54.430586+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:55.430727+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:56.430864+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988563 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:57.431080+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 3407872 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:58.431272+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:58:59.431515+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:00.431665+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.002922058s of 42.007659912s, submitted: 1
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:01.431849+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988695 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:02.432055+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:03.432237+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:04.432462+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:05.432670+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:06.432817+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf4c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990207 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:07.432993+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:08.433131+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:09.433447+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:10.433582+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:11.433718+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990207 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:12.433858+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.097983360s of 12.105645180s, submitted: 2
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:13.434033+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:14.434256+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 3399680 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:15.434393+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:16.434526+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989616 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:17.434721+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:18.434899+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:19.435103+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:20.435270+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:21.435409+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:22.435548+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:23.435694+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:24.435900+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:25.436128+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:26.436301+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 3391488 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:27.436503+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:28.436656+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:29.436786+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:30.436921+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:31.437071+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:32.437224+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:33.437387+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:34.437517+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:35.437655+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:36.437785+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:37.437965+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:38.438115+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:39.438301+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:40.438474+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:41.438612+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:42.438902+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:43.439038+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:44.439180+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:45.439367+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:46.439509+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:47.440913+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:48.441038+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf4c00 session 0x55c5bd825c20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd314c00 session 0x55c5bff8c960
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:49.441159+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:50.441405+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85786624 unmapped: 3383296 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:51.441539+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9188 writes, 34K keys, 9188 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9188 writes, 2303 syncs, 3.99 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 773 writes, 1164 keys, 773 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
                                           Interval WAL: 773 writes, 386 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c5bbdc7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 3350528 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:52.441761+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 3350528 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:53.441984+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85819392 unmapped: 3350528 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:54.442175+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 3342336 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:55.442399+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 3342336 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:56.442535+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989484 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 3342336 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:57.442709+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 3342336 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:58.442851+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d4400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.497116089s of 46.570705414s, submitted: 2
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3334144 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T09:59:59.442984+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3334144 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:00.443093+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:01.443262+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 3334144 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989616 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:02.443403+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:03.443545+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:04.443684+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:05.443823+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:06.443954+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991128 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:07.444187+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:08.444463+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:09.444768+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:10.444927+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:11.445047+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990537 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:12.445169+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:13.445436+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:14.445586+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:15.445753+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.288171768s of 16.321334839s, submitted: 3
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread fragmentation_score=0.000031 took=0.000043s
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:16.445896+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:17.446069+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:18.446289+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:19.446439+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 3325952 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:20.446564+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:21.446702+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:22.446837+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:23.447056+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:24.447221+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:25.447352+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:26.447493+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:27.447640+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:28.447794+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 3317760 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:29.447927+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:30.448153+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:31.448392+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:32.448530+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:33.448722+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:34.448960+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:35.449181+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:36.449424+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:37.449603+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:38.449761+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 3309568 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:39.449896+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:40.450017+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:41.450148+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c06c0000 session 0x55c5bd799a40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:42.450322+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:43.450446+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:44.450582+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 3301376 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:45.450727+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:46.450873+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:47.451064+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:48.451400+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:49.451625+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:50.451751+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:51.451886+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990405 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:52.452050+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.008251190s of 37.012298584s, submitted: 1
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:53.452228+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:54.452403+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:55.452574+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:56.452737+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993561 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:57.452907+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5c0abb4a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd314400 session 0x55c5c0aa74a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:58.453043+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:00:59.453131+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:00.453252+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:01.453388+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993561 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:02.453559+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:03.453737+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:04.453915+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:05.454038+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:06.454410+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85876736 unmapped: 3293184 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993561 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:07.454596+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:08.454748+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.823002815s of 16.056192398s, submitted: 3
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:09.454877+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:10.455033+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:11.455253+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995073 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:12.455481+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:13.455855+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:14.456109+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf4c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:15.456276+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:16.456527+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995073 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:17.456846+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:18.457053+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:19.457217+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:20.457512+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.591454506s of 12.605092049s, submitted: 4
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:21.457649+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:22.770733+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:23.770851+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:24.771186+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:25.771438+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:26.771870+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:27.772446+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:28.772648+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:29.772836+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:30.773031+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:31.773264+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:32.773419+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:33.773601+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:34.773984+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:35.774147+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:36.774289+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:37.774494+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:38.774676+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 3284992 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:39.774936+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:40.775087+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:41.775231+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:42.775371+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:43.775564+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:44.775735+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:45.775938+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:46.776168+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:47.776457+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:48.776631+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 3276800 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.426548004s of 27.788986206s, submitted: 2
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:49.776810+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 4276224 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:50.776961+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:51.777103+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:52.777274+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:53.777476+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:54.777652+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:55.777830+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:56.778032+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:57.778284+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:58.778389+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:01:59.778574+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:00.778747+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:01.778886+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:02.779003+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:03.779137+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:04.779305+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:05.779484+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:06.779635+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:07.779829+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:08.779955+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:09.780097+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:10.780291+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:11.780491+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:12.780701+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:13.780864+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:14.781007+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:15.781168+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:16.781380+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:17.781553+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:18.781710+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:19.781844+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:20.782020+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:21.782254+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:22.782451+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:23.782637+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:24.782773+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:25.782940+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.075836182s of 37.604110718s, submitted: 189
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:26.783088+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4243456 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993846 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:27.783231+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4243456 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:28.783397+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:29.783519+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:30.783607+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:31.783709+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:32.783851+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:33.783976+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:34.784112+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:35.784284+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:36.784516+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:37.784717+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:38.784851+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:39.785046+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:40.785250+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:41.785394+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:42.785525+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:43.785665+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:44.785804+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:45.785932+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:46.786065+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:47.786265+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:48.786398+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:49.786531+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:50.786675+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:51.786811+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:52.786961+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:53.787102+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:54.787324+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:55.787488+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:56.787663+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:57.787869+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:58.788059+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:02:59.788215+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:00.788357+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:01.788543+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:02.788707+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:03.788865+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:04.789024+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:05.789166+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:06.789303+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c067fc00 session 0x55c5c049e3c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5bfd73c20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:07.789531+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:08.789668+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:09.789817+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:10.789972+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:11.790108+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:12.790251+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993759 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:13.790422+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:14.790596+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:15.790723+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:16.790904+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:17.791071+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 50.814117432s of 51.125740051s, submitted: 118
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993891 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:18.791263+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd314c00 session 0x55c5c06ded20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bfdf4c00 session 0x55c5c048b0e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:19.791383+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:20.791526+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:21.791711+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:22.791856+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993891 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:23.792004+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:24.792160+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:25.792316+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:26.792588+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:27.792807+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995403 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:28.792955+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:29.793102+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.486224174s of 12.492043495s, submitted: 2
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:30.793312+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:31.793445+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:32.793586+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995403 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:33.793717+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:34.793883+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:35.794005+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c06c0800 session 0x55c5c049f2c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c04d4400 session 0x55c5c06cd4a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:36.794147+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:37.794363+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994812 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:38.794528+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:39.794685+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:40.794866+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:41.795017+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:42.795179+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994812 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:43.795369+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.910851479s of 13.921665192s, submitted: 3
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:44.795508+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:45.795641+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:46.795777+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:47.795941+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994812 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:48.796067+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:49.796462+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:50.796781+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:51.797066+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:52.797339+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995733 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:53.797515+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:54.797630+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:55.797801+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:56.797965+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:57.798172+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995733 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:58.798354+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:03:59.798520+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:00.798948+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:01.799301+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:02.799597+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995733 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:03.799742+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.066726685s of 20.080118179s, submitted: 4
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:04.799954+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:05.800248+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:06.800460+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:07.800709+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:08.800957+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:09.801131+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:10.801334+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:11.801567+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:12.801827+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:13.802009+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:14.802298+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:15.802510+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:16.802753+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:17.802988+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:18.803140+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:19.803395+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:20.803640+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:21.803789+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:22.804589+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:23.804763+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:24.804968+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:25.805443+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:26.805784+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:27.806484+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:28.806939+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:29.807244+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:30.807399+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:31.807652+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:32.807817+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:33.807960+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:34.808141+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:35.808355+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:36.808601+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:37.809104+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:38.809345+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:39.809545+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:40.809812+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:41.809987+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:42.810124+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:43.810470+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:44.810607+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:45.810834+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:46.810971+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:47.811264+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:48.811730+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:49.811860+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:50.811990+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5bf1de780
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:51.812115+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:52.812278+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:53.812453+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:54.812615+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:55.812762+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bff8b0e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:56.813167+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:57.813415+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995601 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:58.813636+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:04:59.813920+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:00.814379+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:01.814740+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 58.070049286s of 58.074813843s, submitted: 1
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:02.814940+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995733 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:03.815147+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:04.815393+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:05.815664+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:06.815879+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:07.816098+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997377 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:08.816345+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:09.816509+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc65e000/0x0/0x4ffc00000, data 0xf3114/0x1ae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:10.816700+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:11.816906+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.218690872s of 10.236251831s, submitted: 5
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:12.817079+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002064 data_alloc: 218103808 data_used: 167936
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:13.817272+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _renew_subs
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:14.817489+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86827008 unmapped: 20176896 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fc656000/0x0/0x4ffc00000, data 0xf735e/0x1b5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _renew_subs
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 146 ms_handle_reset con 0x55c5c06c1000 session 0x55c5c049f0e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:15.817736+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86835200 unmapped: 20168704 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fbe52000/0x0/0x4ffc00000, data 0x8f946b/0x9b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:16.817896+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 20135936 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 147 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5c06cd680
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:17.818166+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 20127744 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099850 data_alloc: 218103808 data_used: 172032
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:18.818476+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 20127744 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:19.818761+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 20127744 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:20.819299+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 20119552 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9dc000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:21.819522+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 20119552 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9dc000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:22.819684+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097982 data_alloc: 218103808 data_used: 172032
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:23.819869+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:24.820033+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:25.820226+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:26.820416+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:27.820670+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097982 data_alloc: 218103808 data_used: 172032
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:28.821072+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:29.821413+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:30.821607+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:31.821907+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:32.822130+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097982 data_alloc: 218103808 data_used: 172032
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:33.822486+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:34.822769+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:35.823069+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:36.823383+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:37.823674+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097982 data_alloc: 218103808 data_used: 172032
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:38.823840+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5bd314400 session 0x55c5c0abaf00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5c067f400 session 0x55c5c0ab5c20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:39.824116+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:40.824302+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:41.824489+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:42.824654+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097982 data_alloc: 218103808 data_used: 172032
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:43.824936+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:44.825084+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:45.825255+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:46.825412+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:47.825593+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097982 data_alloc: 218103808 data_used: 172032
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:48.825725+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:49.825873+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d4400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 37.039752960s of 37.658199310s, submitted: 45
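
The _kv_sync_thread line is BlueStore reporting how busy its RocksDB commit thread was over the sampling window; here it sat idle almost the whole time while committing 45 transactions. The ratio is trivial to check:

    # Numbers copied verbatim from the utilization line above.
    idle, window, submitted = 37.039752960, 37.658199310, 45
    print(f"idle {idle / window:.1%}, ~{submitted / window:.1f} commits/s")
    # idle 98.4%, ~1.2 commits/s
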
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:50.826001+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:51.826156+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:52.826533+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098114 data_alloc: 218103808 data_used: 172032
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:53.826682+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:54.826831+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:55.826986+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:56.827150+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:57.827357+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099626 data_alloc: 218103808 data_used: 172032
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:58.827509+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:05:59.827847+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:00.828110+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:01.828470+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 20111360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5c06c0800 session 0x55c5c0ab4d20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5c06c1800 session 0x55c5c0ab41e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5bd314400 session 0x55c5bd708d20
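
Runs like the six lines above are connection churn on the messenger: a peer reconnects, handle_auth_request issues it a cephx challenge, and ms_handle_reset then tells osd.0 that the previous session on that Connection pointer is gone, so the stale session object is dropped. Pointer values get reused, so pairing the two message types by address is only a heuristic, but it is a quick way to gauge the churn; a hypothetical tally script:

    import re
    from collections import Counter

    # A few reset/challenge lines copied from the log above.
    log = """\
    monclient: handle_auth_request added challenge on 0x55c5c06c0800
    osd.0 148 ms_handle_reset con 0x55c5c06c0800 session 0x55c5c0ab4d20
    monclient: handle_auth_request added challenge on 0x55c5c06c1800
    osd.0 148 ms_handle_reset con 0x55c5c06c1800 session 0x55c5c0ab41e0
    """
    resets = Counter(re.findall(r"ms_handle_reset con (0x[0-9a-f]+)", log))
    print(resets.most_common())  # each connection pointer reset once here
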
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.439785004s of 12.535116196s, submitted: 3
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:02.828776+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 20103168 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098312 data_alloc: 218103808 data_used: 172032
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:03.828915+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5bf7165a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 12345344 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 ms_handle_reset con 0x55c5c067f400 session 0x55c5c06df0e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:04.829248+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94658560 unmapped: 12345344 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:05.829390+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb9de000/0x0/0x4ffc00000, data 0xd6d545/0xe2e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 148 handle_osd_map epochs [149,149], i have 149, src has [1,149]
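
These two handle_osd_map lines are an epoch catch-up: the first MOSDMap delivers epoch 149 while the OSD still holds 148; by the time the duplicate copy is processed the OSD already has 149 and nothing is left to apply. A minimal sketch of that decision (illustrative, not Ceph's implementation):

    def epochs_to_apply(first: int, last: int, have: int) -> range:
        """Apply only map epochs newer than the one we already have."""
        start = max(first, have + 1)
        return range(start, last + 1) if start <= last else range(0)

    print(list(epochs_to_apply(149, 149, 148)))  # [149] -> advance to 149
    print(list(epochs_to_apply(149, 149, 149)))  # []    -> duplicate, no-op
    print(list(epochs_to_apply(149, 150, 149)))  # [150] -> matches the
                                                 # [149,150] message below
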
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94674944 unmapped: 12328960 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:06.829590+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94683136 unmapped: 12320768 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:07.829798+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 150 ms_handle_reset con 0x55c5c06c0800 session 0x55c5c06cda40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94920704 unmapped: 12083200 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 150 ms_handle_reset con 0x55c5c06c1c00 session 0x55c5c0aa65a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 150 ms_handle_reset con 0x55c5bd314400 session 0x55c5bff8b0e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176555 data_alloc: 218103808 data_used: 6991872
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 150 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5bfe27e00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 150 ms_handle_reset con 0x55c5c067f400 session 0x55c5bf1d45a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:08.829991+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 12091392 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:09.830145+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 12091392 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:10.830307+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 12091392 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fb433000/0x0/0x4ffc00000, data 0x13147e3/0x13d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:11.830421+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 12091392 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:12.830703+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 12091392 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _renew_subs
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
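
_renew_subs re-sends the mon client's subscription set (this is how the OSD keeps receiving new osdmaps) to mon.compute-0; the v2: prefix and port 3300 mark the msgr2 endpoint, and the trailing /0 is the address nonce. The requested map, epoch 151, arrives immediately below. A hypothetical one-off parser for that endpoint string:

    import re

    line = ("monclient: _send_mon_message to mon.compute-0 "
            "at v2:192.168.122.100:3300/0")
    m = re.search(r"to (mon\.\S+) at (v[12]):([\d.]+):(\d+)/(\d+)", line)
    name, proto, ip, port, nonce = m.groups()
    print(name, proto, ip, port, nonce)
    # mon.compute-0 v2 192.168.122.100 3300 0
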
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.216803551s of 10.641160011s, submitted: 41
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179129 data_alloc: 218103808 data_used: 6991872
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:13.831013+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 94912512 unmapped: 12091392 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1400 session 0x55c5bff8d860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0000 session 0x55c5bf1bc5a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:14.831121+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb42f000/0x0/0x4ffc00000, data 0x13167b5/0x13dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:15.831305+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:16.831489+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:17.831690+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218497 data_alloc: 234881024 data_used: 12800000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:18.831826+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb42f000/0x0/0x4ffc00000, data 0x13167b5/0x13dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:19.831956+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:20.832150+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb42f000/0x0/0x4ffc00000, data 0x13167b5/0x13dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:21.832353+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:22.832512+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218497 data_alloc: 234881024 data_used: 12800000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:23.832664+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 100294656 unmapped: 6709248 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:24.832803+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.587004662s of 11.596732140s, submitted: 10
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb42f000/0x0/0x4ffc00000, data 0x13167b5/0x13dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [1])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 104103936 unmapped: 2899968 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:25.832952+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 103112704 unmapped: 3891200 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:26.833094+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4faf22000/0x0/0x4ffc00000, data 0x18247b5/0x18ea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 103178240 unmapped: 3825664 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
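
Here RocksDB sealed its active memtable and opened a new write-ahead log (#43), the expected result of the steady trickle of commits. The surrounding heartbeats corroborate real data landing: data stored grew from 0x13167b5 bytes in the earlier epoch-151 heartbeats to 0x18247b5 just above. Plain arithmetic on those two values:

    # Hex values copied from the two heartbeats.
    before, after = 0x13167b5, 0x18247b5
    print(f"+{(after - before) / 2**20:.1f} MiB stored")  # ~5.1 MiB of new data
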
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:27.833262+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 2703360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261579 data_alloc: 234881024 data_used: 13148160
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:28.833397+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd314400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bd7f7c20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d4400 session 0x55c5bcb6eb40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 2703360 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:29.833578+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 2785280 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:30.833711+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 2785280 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:31.833846+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d7c000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 2785280 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:32.834015+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262051 data_alloc: 234881024 data_used: 13152256
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:33.834160+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d7c000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:34.834331+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:35.834466+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:36.834616+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:37.834901+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d7c000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262051 data_alloc: 234881024 data_used: 13152256
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:38.835109+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:39.835311+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.887156487s of 15.051704407s, submitted: 64
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:40.835575+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:41.835815+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d7c000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:42.835985+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d7c000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261883 data_alloc: 234881024 data_used: 13152256
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:43.836124+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:44.836434+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105250816 unmapped: 1753088 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:45.836673+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 1744896 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:46.836982+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105283584 unmapped: 1720320 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1400 session 0x55c5bf1ce780
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1c00 session 0x55c5bf1b6000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0000 session 0x55c5bff8e000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:47.837186+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 105283584 unmapped: 1720320 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d7c000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0000 session 0x55c5bff8c5a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264907 data_alloc: 234881024 data_used: 13676544
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:48.837433+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106332160 unmapped: 671744 heap: 107003904 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d4400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d4400 session 0x55c5bff8da40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0c00 session 0x55c5bd8dfc20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:49.837581+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5c049ed20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1c00 session 0x55c5bfd43a40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0000 session 0x55c5bd78bc20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0c00 session 0x55c5bd8252c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 8249344 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:50.837786+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 8249344 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:51.837995+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92e5000/0x0/0x4ffc00000, data 0x1eb07c5/0x1f77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 8265728 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.295121193s of 12.392032623s, submitted: 28
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d4400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d4400 session 0x55c5c087f0e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:52.838136+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 8265728 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321487 data_alloc: 234881024 data_used: 13676544
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:53.838280+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bfd730e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 8265728 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:54.838519+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92e5000/0x0/0x4ffc00000, data 0x1eb07c5/0x1f77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1400 session 0x55c5bd7985a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 8265728 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1400 session 0x55c5bfe27a40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:55.838655+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106209280 unmapped: 8282112 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:56.838778+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 8265728 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:57.839056+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 1966080 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369746 data_alloc: 234881024 data_used: 20332544
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:58.839211+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5bf7165a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0000 session 0x55c5bfd42960
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 1966080 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:06:59.839340+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92e3000/0x0/0x4ffc00000, data 0x1eb07f8/0x1f79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 1933312 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:00.839507+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 1933312 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:01.839642+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112558080 unmapped: 1933312 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:02.839762+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92e3000/0x0/0x4ffc00000, data 0x1eb07f8/0x1f79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 1900544 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:03.839887+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1369746 data_alloc: 234881024 data_used: 20332544
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92e3000/0x0/0x4ffc00000, data 0x1eb07f8/0x1f79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 1867776 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:04.840060+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 1859584 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:05.840226+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 1859584 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:06.840391+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 1859584 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:07.840554+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 1859584 heap: 114491392 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.207042694s of 16.260372162s, submitted: 8
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:08.840707+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382754 data_alloc: 234881024 data_used: 20353024
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115286016 unmapped: 2351104 heap: 117637120 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f92e3000/0x0/0x4ffc00000, data 0x1eb07f8/0x1f79000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:09.840862+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115269632 unmapped: 4595712 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:10.841038+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f877e000/0x0/0x4ffc00000, data 0x2a157f8/0x2ade000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115269632 unmapped: 4595712 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:11.841230+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 3383296 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:12.841399+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 3383296 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:13.841569+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464462 data_alloc: 234881024 data_used: 21639168
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8776000/0x0/0x4ffc00000, data 0x2a1d7f8/0x2ae6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116482048 unmapped: 3383296 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:14.841733+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 3375104 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8776000/0x0/0x4ffc00000, data 0x2a1d7f8/0x2ae6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:15.841946+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 3375104 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:16.842179+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8776000/0x0/0x4ffc00000, data 0x2a1d7f8/0x2ae6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 3375104 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:17.842426+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 3366912 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:18.842552+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464478 data_alloc: 234881024 data_used: 21639168
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8776000/0x0/0x4ffc00000, data 0x2a1d7f8/0x2ae6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.643788338s of 10.466860771s, submitted: 84
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 3350528 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:19.842684+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 3350528 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:20.842840+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8776000/0x0/0x4ffc00000, data 0x2a1d7f8/0x2ae6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 3317760 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:21.842999+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 3317760 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8776000/0x0/0x4ffc00000, data 0x2a1d7f8/0x2ae6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0000 session 0x55c5c048ab40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0c00 session 0x55c5bd8dc5a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:22.843137+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd825e00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3efc00 session 0x55c5c00683c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 3309568 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:23.843342+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0800 session 0x55c5bd825680
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5bf1c2f00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278023 data_alloc: 234881024 data_used: 13680640
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0000 session 0x55c5bff8c3c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:24.843607+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:25.843817+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f996b000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:26.844257+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f996b000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:27.844460+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:28.844610+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277263 data_alloc: 234881024 data_used: 13676544
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f996b000/0x0/0x4ffc00000, data 0x182a7b5/0x18f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:29.844743+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:30.844903+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.404096603s of 12.254305840s, submitted: 41
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0800 session 0x55c5c049f2c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:31.845058+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 8118272 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:32.845250+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bf51ab40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:33.845437+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3efc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152333 data_alloc: 218103808 data_used: 7512064
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:34.845572+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:35.845725+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:36.845879+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:37.846061+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:38.846259+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153845 data_alloc: 218103808 data_used: 7512064
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:39.846422+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:40.846815+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:41.846967+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:42.847268+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:43.847466+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153845 data_alloc: 218103808 data_used: 7512064
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:44.847613+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:45.847809+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:46.847992+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:47.848271+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:48.848449+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153845 data_alloc: 218103808 data_used: 7512064
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:49.848638+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.363817215s of 18.559324265s, submitted: 22
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:50.848859+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:51.849020+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:52.849251+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:53.849498+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153289 data_alloc: 218103808 data_used: 7512064
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:54.849738+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:55.849957+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:56.850107+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:57.850280+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:58.850448+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153289 data_alloc: 218103808 data_used: 7512064
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:07:59.850694+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:00.850876+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:01.851074+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107847680 unmapped: 12017664 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:02.851253+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5bd78a1e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108109824 unmapped: 11755520 heap: 119865344 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.769110680s of 13.776129723s, submitted: 2
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:03.851415+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5bfd72b40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196133 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:04.851575+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f6a000/0x0/0x4ffc00000, data 0x122e743/0x12f2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:05.851725+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0000 session 0x55c5c06df860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:06.851878+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f6a000/0x0/0x4ffc00000, data 0x122e743/0x12f2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:07.852085+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd7983c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:08.852266+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5c0aba1e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197947 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0000 session 0x55c5c048b2c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:09.852563+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:10.852777+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 13901824 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:11.852933+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f69000/0x0/0x4ffc00000, data 0x122e753/0x12f3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:12.853084+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:13.853244+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228803 data_alloc: 234881024 data_used: 11476992
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:14.853395+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:15.853778+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:16.854381+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:17.854627+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f69000/0x0/0x4ffc00000, data 0x122e753/0x12f3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:18.854823+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228803 data_alloc: 234881024 data_used: 11476992
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108437504 unmapped: 13541376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:19.854992+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f69000/0x0/0x4ffc00000, data 0x122e753/0x12f3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 13737984 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:20.855265+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 13737984 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:21.855444+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 13737984 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:22.855683+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.122682571s of 19.164644241s, submitted: 15
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112230400 unmapped: 9748480 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:23.855856+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299977 data_alloc: 234881024 data_used: 11915264
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:24.856007+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 11649024 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95f6000/0x0/0x4ffc00000, data 0x1ba1753/0x1c66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:25.856255+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 11649024 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:26.856411+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 11649024 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95f6000/0x0/0x4ffc00000, data 0x1ba1753/0x1c66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:27.856597+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 11640832 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95f6000/0x0/0x4ffc00000, data 0x1ba1753/0x1c66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95f6000/0x0/0x4ffc00000, data 0x1ba1753/0x1c66000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:28.856782+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 11640832 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305777 data_alloc: 234881024 data_used: 12337152
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:29.856922+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 11501568 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:30.857107+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 11501568 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:31.857273+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 11501568 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:32.857417+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 11501568 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95d2000/0x0/0x4ffc00000, data 0x1bc5753/0x1c8a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:33.857560+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110485504 unmapped: 11493376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304817 data_alloc: 234881024 data_used: 12337152
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:34.857715+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110485504 unmapped: 11493376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:35.857872+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110485504 unmapped: 11493376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:36.858041+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110485504 unmapped: 11493376 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.739342690s of 14.036973953s, submitted: 112
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:37.858258+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11485184 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95cc000/0x0/0x4ffc00000, data 0x1bcb753/0x1c90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:38.858391+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11485184 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304561 data_alloc: 234881024 data_used: 12337152
Jan 26 10:29:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2445155982' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:39.858534+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11485184 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:40.858678+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11485184 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:41.858837+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11485184 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95cc000/0x0/0x4ffc00000, data 0x1bcb753/0x1c90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95cc000/0x0/0x4ffc00000, data 0x1bcb753/0x1c90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:42.858990+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11485184 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:43.859127+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 11476992 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304649 data_alloc: 234881024 data_used: 12337152
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:44.859300+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 11476992 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95c9000/0x0/0x4ffc00000, data 0x1bce753/0x1c93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:45.859416+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 11476992 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:46.859574+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 11476992 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:47.859740+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 11476992 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.978490829s of 11.007252693s, submitted: 4
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:48.859876+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 11427840 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306025 data_alloc: 234881024 data_used: 12353536
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:49.860021+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 11427840 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f95bd000/0x0/0x4ffc00000, data 0x1bda753/0x1c9f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:50.860145+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 11427840 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:51.860283+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 11427840 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:52.860400+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 11427840 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5bff8ba40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0800 session 0x55c5c0ab4780
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c09990e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:53.860513+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162034 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:54.860644+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:55.860773+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:56.860895+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:57.861062+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:58.861258+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162034 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:08:59.861389+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:00.861521+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:01.861649+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:02.861793+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:03.862002+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162034 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:04.862125+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:05.862265+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:06.862455+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:07.862745+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:08.862881+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162034 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:09.863007+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:10.863145+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:11.863275+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:12.863408+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:13.863542+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162034 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:14.863679+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:15.863794+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:16.863949+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:17.864074+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 13942784 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.849651337s of 30.339509964s, submitted: 14
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5c06cc000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:18.864182+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173382 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:19.864268+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:20.864353+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa35b000/0x0/0x4ffc00000, data 0xe3d743/0xf01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:21.864455+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:22.864572+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa35b000/0x0/0x4ffc00000, data 0xe3d743/0xf01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:23.864710+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0000 session 0x55c5bff8c1e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173382 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:24.864869+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5bff912c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 13778944 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c1400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c1400 session 0x55c5bfd734a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:25.864944+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c06de1e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:26.865080+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:27.865235+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa336000/0x0/0x4ffc00000, data 0xe61753/0xf26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:28.865393+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182408 data_alloc: 218103808 data_used: 7618560
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:29.865547+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:30.865738+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa336000/0x0/0x4ffc00000, data 0xe61753/0xf26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa336000/0x0/0x4ffc00000, data 0xe61753/0xf26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:31.865885+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:32.866038+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:33.866117+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182408 data_alloc: 218103808 data_used: 7618560
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa336000/0x0/0x4ffc00000, data 0xe61753/0xf26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:34.866306+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa336000/0x0/0x4ffc00000, data 0xe61753/0xf26000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:35.866458+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:36.866551+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 14344192 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:37.866762+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 14376960 heap: 121978880 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.720226288s of 20.127946854s, submitted: 8
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa2b8000/0x0/0x4ffc00000, data 0xedf753/0xfa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,0,0,9,3])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:38.866912+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108675072 unmapped: 15409152 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240552 data_alloc: 218103808 data_used: 7897088
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:39.867038+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 15908864 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:40.867170+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 15908864 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:41.867318+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9b73000/0x0/0x4ffc00000, data 0x1624753/0x16e9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 15908864 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:42.867482+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 15908864 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:43.867680+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 15908864 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249194 data_alloc: 218103808 data_used: 7888896
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:44.867838+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9b73000/0x0/0x4ffc00000, data 0x1624753/0x16e9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 15843328 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:45.868014+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16023552 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:46.868161+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16023552 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:47.868393+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16023552 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:48.868542+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9b4f000/0x0/0x4ffc00000, data 0x1648753/0x170d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16023552 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246418 data_alloc: 218103808 data_used: 7888896
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:49.868679+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16023552 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:50.868823+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.267821312s of 12.903651237s, submitted: 71
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 15941632 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:51.868962+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 3013 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1620 writes, 4804 keys, 1620 commit groups, 1.0 writes per commit group, ingest: 4.82 MB, 0.01 MB/s
                                           Interval WAL: 1620 writes, 710 syncs, 2.28 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 15941632 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:52.869114+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9b44000/0x0/0x4ffc00000, data 0x1653753/0x1718000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 15941632 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:53.869309+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 15941632 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246586 data_alloc: 218103808 data_used: 7888896
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:54.869467+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108150784 unmapped: 15933440 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:55.869583+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 108150784 unmapped: 15933440 heap: 124084224 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:56.869720+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5bff8f0e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 29089792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:57.869909+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9aa4000/0x0/0x4ffc00000, data 0x16f3753/0x17b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 29065216 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:58.870120+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 29065216 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e91000/0x0/0x4ffc00000, data 0x2306753/0x23cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1338864 data_alloc: 218103808 data_used: 7888896
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:09:59.870277+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 29065216 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bfe263c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:00.870400+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e91000/0x0/0x4ffc00000, data 0x2306753/0x23cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d5400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d5400 session 0x55c5bd709680
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107700224 unmapped: 29057024 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:01.870520+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06d9800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06d9800 session 0x55c5bfd42000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e91000/0x0/0x4ffc00000, data 0x2306753/0x23cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.908476830s of 11.043089867s, submitted: 14
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd7ebe00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 28835840 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:02.870645+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d5400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 28835840 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:03.870771+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111853568 unmapped: 24903680 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413071 data_alloc: 234881024 data_used: 17989632
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:04.870977+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e6c000/0x0/0x4ffc00000, data 0x232a763/0x23f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20021248 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:05.871167+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20021248 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:06.871346+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20021248 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:07.871596+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20021248 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:08.871805+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20021248 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434047 data_alloc: 234881024 data_used: 21106688
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:09.872024+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 20021248 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e6c000/0x0/0x4ffc00000, data 0x232a763/0x23f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:10.872221+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116793344 unmapped: 19963904 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:11.872359+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e6b000/0x0/0x4ffc00000, data 0x232a763/0x23f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 19931136 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:12.872482+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8e6b000/0x0/0x4ffc00000, data 0x232a763/0x23f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 19922944 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:13.872715+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 19922944 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434519 data_alloc: 234881024 data_used: 21106688
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:14.872930+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.575447083s of 12.622465134s, submitted: 12
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 14589952 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:15.873094+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 14573568 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:16.873278+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 16531456 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:17.873474+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 16531456 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:18.873797+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8921000/0x0/0x4ffc00000, data 0x2875763/0x293b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 16531456 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1484743 data_alloc: 234881024 data_used: 21970944
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:19.874097+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 16531456 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:20.874315+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120233984 unmapped: 16523264 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:21.874548+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 121282560 unmapped: 15474688 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:22.874748+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bff861e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d5400 session 0x55c5bff8e5a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8921000/0x0/0x4ffc00000, data 0x2875763/0x293b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5bd7083c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:23.874949+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258184 data_alloc: 218103808 data_used: 7888896
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:24.875105+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:25.875248+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:26.875536+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:27.875874+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:28.876010+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9b41000/0x0/0x4ffc00000, data 0x1656753/0x171b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0000 session 0x55c5bf51b2c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5bd8250e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112476160 unmapped: 24281088 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258184 data_alloc: 218103808 data_used: 7888896
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.593686104s of 14.806592941s, submitted: 85
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:29.876170+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd7ea5a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:30.876354+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:31.876528+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:32.876671+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:33.876795+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180621 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:34.876939+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:35.877092+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:36.877253+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:37.877424+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:38.877589+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180621 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:39.877713+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:40.877860+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:41.877997+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:42.878167+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:43.878318+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:44.878523+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180621 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:45.878661+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:46.878822+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:47.878997+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:48.879295+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:49.879485+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180621 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:50.879733+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:51.880255+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:52.880505+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:53.880728+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:54.880906+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180621 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:55.881254+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 24993792 heap: 136757248 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:56.881393+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d5400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d5400 session 0x55c5be1ae1e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bff914a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c084c400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c084c400 session 0x55c5c0ab52c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c09981e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d5400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.276571274s of 27.300722122s, submitted: 9
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d5400 session 0x55c5c087f860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5bf1b65a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5bff8b860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4000 session 0x55c5be1af680
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd824000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 32055296 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:57.881762+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 32055296 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:58.881950+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 32047104 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:10:59.882362+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251310 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a5a000/0x0/0x4ffc00000, data 0x173d753/0x1802000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 32038912 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:00.882598+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 32038912 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:01.882841+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4000 session 0x55c5c01334a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 32038912 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:02.883097+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d5400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d5400 session 0x55c5c0132d20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:03.883270+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 32038912 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5c0132b40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067fc00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067fc00 session 0x55c5c0133c20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a5a000/0x0/0x4ffc00000, data 0x173d753/0x1802000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a5a000/0x0/0x4ffc00000, data 0x173d753/0x1802000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:04.883405+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112082944 unmapped: 32022528 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256506 data_alloc: 218103808 data_used: 6991872
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:05.883586+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112082944 unmapped: 32022528 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:06.883710+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 28573696 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:07.883872+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 28565504 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a58000/0x0/0x4ffc00000, data 0x173d786/0x1804000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:08.883991+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 28565504 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:09.884117+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 28565504 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325514 data_alloc: 234881024 data_used: 17256448
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a58000/0x0/0x4ffc00000, data 0x173d786/0x1804000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:10.884278+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 28565504 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:11.884407+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 28565504 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a58000/0x0/0x4ffc00000, data 0x173d786/0x1804000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:12.884540+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 28565504 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:13.884733+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9a58000/0x0/0x4ffc00000, data 0x173d786/0x1804000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 28549120 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:14.884916+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 28540928 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325514 data_alloc: 234881024 data_used: 17256448
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:15.885051+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 28540928 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.382997513s of 19.487087250s, submitted: 30
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:16.885249+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 28540928 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:17.885413+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 27484160 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98b1000/0x0/0x4ffc00000, data 0x18e4786/0x19ab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:18.885542+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 27271168 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:19.885670+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352266 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:20.885789+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:21.885929+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:22.886051+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:23.886178+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:24.886323+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352266 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:25.886465+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:26.886619+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:27.886780+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:28.886904+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 27107328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:29.887066+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 27074560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352266 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:30.887253+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 27074560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:31.887438+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:32.887649+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:33.887803+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:34.887955+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352266 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:35.888099+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:36.888245+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:37.888416+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:38.888598+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:39.888762+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352266 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:40.888897+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:41.889026+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5be470400 session 0x55c5bf1bde00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c04d5400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 27041792 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5be470800 session 0x55c5be0a6000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c067f400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:42.889203+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:43.889389+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:44.889523+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352266 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:45.889687+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:46.889829+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:47.890009+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f98a0000/0x0/0x4ffc00000, data 0x18f5786/0x19bc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:48.890170+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 27033600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.537471771s of 32.575607300s, submitted: 22
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:49.890328+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 27099136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350290 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:50.890528+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:51.890728+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989f000/0x0/0x4ffc00000, data 0x18f6786/0x19bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:52.890934+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:53.891167+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:54.891433+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989f000/0x0/0x4ffc00000, data 0x18f6786/0x19bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350218 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:55.891648+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:56.891816+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 27066368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989f000/0x0/0x4ffc00000, data 0x18f6786/0x19bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4400 session 0x55c5be0a7860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4800 session 0x55c5bf51a960
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4c00 session 0x55c5c06cdc20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989f000/0x0/0x4ffc00000, data 0x18f6786/0x19bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bd7f7860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:57.892057+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5400 session 0x55c5bff883c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 26894336 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4400 session 0x55c5bd7eb2c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4800 session 0x55c5bd8e2f00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4c00 session 0x55c5bfd73860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5c06df860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:58.892277+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26877952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:11:59.892437+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.740381241s of 10.729345322s, submitted: 250
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26877952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355752 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:00.892578+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 26877952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:01.892810+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 26861568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:02.892958+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 26828800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:03.893244+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 26828800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:04.893387+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 26828800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355752 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:05.893544+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 26828800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:06.893701+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 26828800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:07.893876+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 26828800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:08.894020+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:09.894164+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355084 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:10.894321+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:11.894467+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:12.894639+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:13.894772+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:14.894898+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355084 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd314400 session 0x55c5c087f2c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0c00 session 0x55c5bd8df860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:15.895061+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:16.895173+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:17.895348+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:18.895507+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:19.895650+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355084 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:20.895792+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:21.895914+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:22.896068+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:23.896185+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 26796032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989d000/0x0/0x4ffc00000, data 0x18f67f8/0x19bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:24.896328+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bfdf4000 session 0x55c5bf1d5e00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 26779648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.116369247s of 25.326297760s, submitted: 2
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355400 data_alloc: 234881024 data_used: 17641472
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:25.896615+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 25370624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bfdf4000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:26.896736+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bfdf4800 session 0x55c5bf1c3e00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 25370624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:27.896931+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 25231360 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:28.897138+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9610000/0x0/0x4ffc00000, data 0x1b837f8/0x1c4c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 25231360 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:29.897283+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 25231360 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380176 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:30.897407+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 25231360 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:31.897536+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 25198592 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:32.897644+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 25182208 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f960a000/0x0/0x4ffc00000, data 0x1b897f8/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:33.897782+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 25182208 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:34.897914+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118931456 unmapped: 25174016 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.028648376s of 10.022297859s, submitted: 115
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381578 data_alloc: 234881024 data_used: 17645568
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:35.898063+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f960a000/0x0/0x4ffc00000, data 0x1b897f8/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 25141248 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:36.898212+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 25141248 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:37.898373+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 25141248 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:38.898501+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 25141248 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9607000/0x0/0x4ffc00000, data 0x1b8c7f8/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:39.898627+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 25141248 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1382226 data_alloc: 234881024 data_used: 17645568
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:40.898772+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf1c23c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118972416 unmapped: 25133056 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9607000/0x0/0x4ffc00000, data 0x1b8c7f8/0x1c55000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:41.898914+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf717680
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:42.899062+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:43.899313+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:44.899439+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357030 data_alloc: 234881024 data_used: 17637376
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:45.899724+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:46.899856+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f989e000/0x0/0x4ffc00000, data 0x18f6786/0x19bd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:47.900018+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:48.900138+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 25083904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.461967468s of 14.074635506s, submitted: 82
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd799a40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4000 session 0x55c5c048ab40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:49.900262+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bfe27860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:50.900397+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:51.900506+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:52.900633+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:53.901031+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:54.901292+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:55.901458+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:56.901587+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:57.901774+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:58.901909+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:12:59.903333+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:00.903580+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:01.903748+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:02.903993+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:03.904280+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:04.904625+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:05.904804+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:06.905046+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:07.905267+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:08.905680+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:09.906079+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:10.906418+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:11.906642+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:12.906991+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:13.907265+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:14.907536+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:15.907680+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:16.907957+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:17.908261+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:18.908449+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:19.908601+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:20.908950+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:21.909175+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:22.909368+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112508928 unmapped: 31596544 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:23.909588+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:24.909775+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:25.909936+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:26.910147+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:27.910426+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:28.910546+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:29.910682+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200891 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:30.910833+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:31.911043+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:32.911323+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:33.911488+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 31588352 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:34.911620+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bd8dfc20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5c06df680
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0c00 session 0x55c5bd8e3a40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5c00 session 0x55c5c06df860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 31580160 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.136810303s of 46.268627167s, submitted: 53
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206093 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bfd72780
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5c048b0e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5c06cc780
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:35.911745+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0c00 session 0x55c5bf51a960
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852000 session 0x55c5bd7f63c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d85000/0x0/0x4ffc00000, data 0x141177c/0x14d7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 31080448 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:36.912017+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 31080448 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:37.912345+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d85000/0x0/0x4ffc00000, data 0x14117b5/0x14d7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 31080448 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:38.912571+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 31072256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:39.912788+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d85000/0x0/0x4ffc00000, data 0x14117b5/0x14d7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 31072256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254289 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:40.912920+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c0133680
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5c09990e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 31072256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:41.913070+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d85000/0x0/0x4ffc00000, data 0x14117b5/0x14d7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf1de780
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0c00 session 0x55c5bf1ded20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 31072256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:42.913264+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 31072256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:43.913412+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 31203328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9d84000/0x0/0x4ffc00000, data 0x14117c5/0x14d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:44.913561+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 30875648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294299 data_alloc: 234881024 data_used: 12804096
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:45.913716+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 30875648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:46.913862+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.153660774s of 11.437462807s, submitted: 40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852400 session 0x55c5bd72ab40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5bd7ea000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110313472 unmapped: 33792000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bff881e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:47.914031+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa30f000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110313472 unmapped: 33792000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:48.914234+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: mgrc ms_handle_reset ms_handle_reset con 0x55c5bd70b800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2891176105
Jan 26 10:29:53 compute-0 ceph-osd[82841]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2891176105,v1:192.168.122.100:6801/2891176105]
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: get_auth_request con 0x55c5c0852000 auth_method 0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: mgrc handle_mgr_configure stats_period=5
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:49.914384+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207764 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:50.914520+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa30f000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:51.914642+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:52.914770+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa30f000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:53.914998+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:54.915253+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207764 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:55.915375+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:56.915618+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa30f000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:57.916310+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa30f000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:58.916469+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:13:59.916610+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207764 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:00.916728+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110354432 unmapped: 33751040 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:01.916909+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf7172c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf7165a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06c0c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06c0c00 session 0x55c5bf7170e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bf716780
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.421150208s of 15.532203674s, submitted: 31
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf716b40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:02.917021+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:03.917160+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9e7f000/0x0/0x4ffc00000, data 0x1319743/0x13dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:04.917546+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253808 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:05.917857+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:06.918904+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:07.919251+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9e7f000/0x0/0x4ffc00000, data 0x1319743/0x13dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 33603584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:08.920096+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf1c34a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 33300480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:09.920290+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 33300480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255932 data_alloc: 218103808 data_used: 6995968
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:10.921138+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 33300480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:11.921670+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x133d743/0x1401000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5bf1c23c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852c00 session 0x55c5bf1d4f00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 33300480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:12.922076+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.266777992s of 10.367519379s, submitted: 14
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c0133a40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:13.922221+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:14.922610+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212408 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:15.922758+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:16.922928+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:17.923241+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:18.923416+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:19.923640+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212408 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:20.923929+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:21.924181+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:22.924413+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:23.924605+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:24.924826+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212408 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:25.925001+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:26.925129+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:27.925334+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:28.925483+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:29.925650+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212408 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:30.925809+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:31.925977+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:32.926132+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:33.926392+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:34.926765+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212408 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:35.927053+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:36.927284+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:37.927465+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:38.927587+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:39.927753+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212408 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:40.927902+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:41.928104+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109486080 unmapped: 34619392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:42.928268+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bd799a40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5c06cd680
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 34603008 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5c0132b40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa425000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0853000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0853000 session 0x55c5bd8e2f00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.282125473s of 30.300283432s, submitted: 7
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:43.929293+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bd8df4a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf716d20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5c0abab40
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5bd7f70e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0853400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0853400 session 0x55c5bfd42000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:44.929415+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:45.929556+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251623 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:46.929724+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:47.929909+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f13000/0x0/0x4ffc00000, data 0x12837b4/0x1349000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:48.930050+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f13000/0x0/0x4ffc00000, data 0x12837b4/0x1349000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:49.930337+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 109518848 unmapped: 34586624 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:50.930547+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287343 data_alloc: 234881024 data_used: 12296192
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f13000/0x0/0x4ffc00000, data 0x12837b4/0x1349000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:51.930723+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:52.930914+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:53.931087+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f13000/0x0/0x4ffc00000, data 0x12837b4/0x1349000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:54.931295+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:55.931462+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287343 data_alloc: 234881024 data_used: 12296192
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:56.931598+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:57.931774+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:58.931928+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:14:59.932021+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9f13000/0x0/0x4ffc00000, data 0x12837b4/0x1349000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:00.932159+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287343 data_alloc: 234881024 data_used: 12296192
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 110813184 unmapped: 33292288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.577539444s of 18.017475128s, submitted: 25
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:01.932374+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 30187520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:02.932541+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:03.932645+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:04.932807+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:05.932971+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x14e37b4/0x15a9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310619 data_alloc: 234881024 data_used: 12427264
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:06.933110+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:07.933260+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:08.933390+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 30081024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:09.933531+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9cad000/0x0/0x4ffc00000, data 0x14e97b4/0x15af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 30064640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:10.933685+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310619 data_alloc: 234881024 data_used: 12427264
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 30064640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:11.933867+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 30064640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:12.933995+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 30064640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.607760429s of 12.195022583s, submitted: 33
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c06de3c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:13.934118+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf7172c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9cad000/0x0/0x4ffc00000, data 0x14e97b4/0x15af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:14.934227+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:15.934377+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219565 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:16.934485+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:17.934658+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:18.934886+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:19.935015+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:20.935155+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219565 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:21.935274+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:22.935420+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:23.935566+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:24.935797+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:25.935965+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219565 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:26.936125+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:27.936313+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:28.936449+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:29.936551+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:30.936681+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219565 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:31.936887+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:32.937103+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:33.937290+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:34.937411+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:35.937542+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219565 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:36.937700+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 32096256 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:37.937911+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.061286926s of 24.648008347s, submitted: 18
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 28909568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf1d4000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:38.938051+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:39.938237+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa09d000/0x0/0x4ffc00000, data 0x10fb743/0x11bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:40.938450+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248785 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:41.938649+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:42.938796+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:43.938947+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:44.939080+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa09d000/0x0/0x4ffc00000, data 0x10fb743/0x11bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa09d000/0x0/0x4ffc00000, data 0x10fb743/0x11bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:45.939260+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248785 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111517696 unmapped: 32587776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5c048a5a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:46.939372+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111525888 unmapped: 32579584 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa09c000/0x0/0x4ffc00000, data 0x10fb766/0x11c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:47.939527+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0853800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111812608 unmapped: 32292864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:48.939674+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:49.939802+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:50.940148+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275470 data_alloc: 234881024 data_used: 10690560
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:51.940552+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:52.940632+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa09c000/0x0/0x4ffc00000, data 0x10fb766/0x11c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:53.940777+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:54.940960+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:55.941135+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275470 data_alloc: 234881024 data_used: 10690560
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:56.941278+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa09c000/0x0/0x4ffc00000, data 0x10fb766/0x11c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:57.941447+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 111910912 unmapped: 32194560 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.611251831s of 20.692722321s, submitted: 11
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:58.941577+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9e06000/0x0/0x4ffc00000, data 0x1391766/0x1456000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0853c00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0853c00 session 0x55c5bf1b63c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 26083328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:15:59.941713+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 26001408 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:00.941854+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1366062 data_alloc: 234881024 data_used: 10915840
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f974d000/0x0/0x4ffc00000, data 0x1a3c766/0x1b01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 25075712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f974d000/0x0/0x4ffc00000, data 0x1a3c766/0x1b01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:01.942010+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f974d000/0x0/0x4ffc00000, data 0x1a3c766/0x1b01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 25075712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f974d000/0x0/0x4ffc00000, data 0x1a3c766/0x1b01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:02.942174+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 25075712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bf1b7860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f974d000/0x0/0x4ffc00000, data 0x1a3c766/0x1b01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:03.942312+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf1b6000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119037952 unmapped: 25067520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f974d000/0x0/0x4ffc00000, data 0x1a3c766/0x1b01000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf1b70e0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5bff91680
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:04.942468+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119054336 unmapped: 25051136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06d9400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:05.942613+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06d8400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363284 data_alloc: 234881024 data_used: 11218944
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 26124288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:06.942764+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 26124288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:07.942952+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 26124288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:08.943088+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 26124288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1a3f776/0x1b05000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:09.943264+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 26124288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1a3f776/0x1b05000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:10.943394+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371948 data_alloc: 234881024 data_used: 12521472
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 26124288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:11.943521+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 26116096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:12.943652+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f9757000/0x0/0x4ffc00000, data 0x1a3f776/0x1b05000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 26116096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:13.943779+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 26116096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:14.943925+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 26116096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:15.944064+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1372404 data_alloc: 234881024 data_used: 12533760
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 26116096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.459621429s of 17.793762207s, submitted: 89
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:16.944238+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 24985600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:17.944881+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119119872 unmapped: 24985600 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c8e000/0x0/0x4ffc00000, data 0x20f2776/0x21b8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:18.945006+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 24379392 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:19.945125+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 24371200 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c59000/0x0/0x4ffc00000, data 0x211f776/0x21e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:20.945266+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429228 data_alloc: 234881024 data_used: 12754944
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119734272 unmapped: 24371200 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:21.945421+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 24363008 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:22.945573+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 24363008 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:23.945751+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 23560192 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:24.945912+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 23560192 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:25.946120+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423732 data_alloc: 234881024 data_used: 12754944
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 23560192 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27983 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
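The lone ceph-mgr line in this stretch is an audit-channel record: client.admin dispatched an orchestrator listing through the mon-mgr target, with "export": true corresponding to the --export flag of ceph orch ls. Reconstructing the CLI form from the audited JSON, as a sketch (the flag-rebuilding rule is mine, not Ceph's own formatter):

    import json

    # Audit payload copied verbatim from the ceph-mgr line above.
    cmd = json.loads('{"prefix": "orch ls", "export": true, '
                     '"target": ["mon-mgr", ""]}')
    # Boolean true options render as "--<name>" on the command line.
    flags = " ".join(f"--{k}" for k, v in cmd.items()
                     if isinstance(v, bool) and v)
    print(f"ceph {cmd['prefix']} {flags}".strip())
    # -> ceph orch ls --export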
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c64000/0x0/0x4ffc00000, data 0x2122776/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:26.946273+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 23560192 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:27.946448+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:28.946625+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:29.946780+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c64000/0x0/0x4ffc00000, data 0x2122776/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:30.946955+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c64000/0x0/0x4ffc00000, data 0x2122776/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423732 data_alloc: 234881024 data_used: 12754944
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.520044327s of 14.694757462s, submitted: 74
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c64000/0x0/0x4ffc00000, data 0x2122776/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:31.947096+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8c64000/0x0/0x4ffc00000, data 0x2122776/0x21e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:32.947257+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:33.947392+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06d8400 session 0x55c5bfd732c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06d9400 session 0x55c5bd708960
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:34.947513+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 23552000 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:35.947809+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344919 data_alloc: 234881024 data_used: 10915840
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bf1de780
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:36.947951+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:37.948130+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f94d3000/0x0/0x4ffc00000, data 0x18b4766/0x1979000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:38.948280+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:39.948530+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:40.948682+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1344919 data_alloc: 234881024 data_used: 10915840
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f94d3000/0x0/0x4ffc00000, data 0x18b4766/0x1979000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:41.948845+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0853800 session 0x55c5be0a7860
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:42.949003+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 24641536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.497808456s of 12.525653839s, submitted: 10
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:43.949184+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 25976832 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:44.949392+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 25976832 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73766/0xe38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:45.949563+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235519 data_alloc: 218103808 data_used: 6991872
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 25976832 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:46.949750+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5c06dfc20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:47.950022+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:48.950173+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:49.950409+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:50.950617+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234931 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:51.950854+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:52.951035+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:53.951229+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:54.951408+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:55.951627+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234931 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:56.951781+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:57.951934+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:58.952090+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:16:59.952257+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:00.952405+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234931 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:01.952577+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:02.952754+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:03.952921+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:04.953110+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:05.953291+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234931 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:06.953508+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:07.953793+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:08.954012+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:09.954321+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 25968640 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.461206436s of 26.923740387s, submitted: 19
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:10.954474+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5800 session 0x55c5bf716d20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305892 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bf1bc5a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bf1c34a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 26959872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06d9400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06d9400 session 0x55c5bf51ad20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0853800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0853800 session 0x55c5bd7085a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:11.954638+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f97be000/0x0/0x4ffc00000, data 0x15ca743/0x168e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 26951680 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:12.954916+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 26951680 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:13.955071+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f97be000/0x0/0x4ffc00000, data 0x15ca743/0x168e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 26951680 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:14.955225+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f97be000/0x0/0x4ffc00000, data 0x15ca743/0x168e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 26951680 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:15.955364+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5bf1d4f00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305892 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26943488 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5bf1d4000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:16.955499+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f5000 session 0x55c5bfd42000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06d9400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 26943488 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c06d9400 session 0x55c5bd8df4a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:17.955670+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0853800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 26787840 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f979a000/0x0/0x4ffc00000, data 0x15ee743/0x16b2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:18.955824+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119504896 unmapped: 24600576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:19.956029+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 23486464 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:20.956249+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f979a000/0x0/0x4ffc00000, data 0x15ee743/0x16b2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365676 data_alloc: 234881024 data_used: 15360000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 23486464 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:21.956374+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 23486464 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:22.956499+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 23478272 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:23.956647+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f979a000/0x0/0x4ffc00000, data 0x15ee743/0x16b2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 23478272 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:24.956762+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 23470080 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:25.956873+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365676 data_alloc: 234881024 data_used: 15360000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 23470080 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:26.956996+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 23470080 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:27.957170+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f979a000/0x0/0x4ffc00000, data 0x15ee743/0x16b2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 23470080 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:28.957257+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.075279236s of 18.855096817s, submitted: 27
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 21561344 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:29.957429+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 129441792 unmapped: 14663680 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:30.957647+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482926 data_alloc: 234881024 data_used: 17350656
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 129933312 unmapped: 14172160 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:31.957787+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8a71000/0x0/0x4ffc00000, data 0x230f743/0x23d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127148032 unmapped: 16957440 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:32.957921+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127148032 unmapped: 16957440 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:33.958290+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127148032 unmapped: 16957440 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:34.958444+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8a76000/0x0/0x4ffc00000, data 0x2312743/0x23d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:35.958579+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1478858 data_alloc: 234881024 data_used: 17584128
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:36.958712+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:37.958914+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:38.959048+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:39.959233+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8a55000/0x0/0x4ffc00000, data 0x2333743/0x23f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:40.959380+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479178 data_alloc: 234881024 data_used: 17592320
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:41.959557+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:42.959718+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 16924672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.891769409s of 14.102795601s, submitted: 134
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0852800 session 0x55c5c0133680
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c0853800 session 0x55c5bd8dfc20
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:43.959862+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bd3bb400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 126312448 unmapped: 17793024 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:44.960083+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4f8a55000/0x0/0x4ffc00000, data 0x2333743/0x23f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 126337024 unmapped: 17768448 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:45.960228+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251708 data_alloc: 218103808 data_used: 7098368
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:46.960447+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bd3bb400 session 0x55c5c0068f00
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:47.960711+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:48.960879+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:49.961035+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:50.961171+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:51.961302+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:52.961439+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:53.961574+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:54.961791+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:55.962001+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:56.962141+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:57.962334+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:58.962544+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:17:59.962748+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:00.962905+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:01.963098+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:02.963244+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:03.963402+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:04.963580+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:05.963729+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:06.963866+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:07.964040+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:08.964409+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:09.964551+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:10.964746+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:11.964883+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:12.965026+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:13.965180+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:14.965473+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:15.965697+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:16.965840+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:17.966024+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:18.966187+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:19.966403+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:20.966561+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:21.966735+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:22.966904+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:23.967055+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:24.967219+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:25.967393+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:26.967531+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:27.967772+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:28.967991+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:29.968143+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:30.968290+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:31.968452+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:32.968601+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:33.968776+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:34.968920+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:35.969098+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:36.969242+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:37.969395+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:38.969524+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:39.969656+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:40.969827+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:41.969997+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:42.970115+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:43.970274+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:44.970416+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:45.970577+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:46.970799+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:47.971085+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:48.971287+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:49.971441+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:50.971607+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:51.971799+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:52.971952+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:53.972295+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:54.972453+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:55.972570+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:56.972704+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:57.972892+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:58.973064+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:18:59.973284+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:00.973445+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:01.973618+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:02.973780+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:03.973922+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:04.974049+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:05.974209+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:06.974335+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120119296 unmapped: 23986176 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'config diff' '{prefix=config diff}'
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'config show' '{prefix=config show}'
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'counter dump' '{prefix=counter dump}'
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'counter schema' '{prefix=counter schema}'
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:07.974493+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120152064 unmapped: 23953408 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:08.974618+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119791616 unmapped: 24313856 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:09.974766+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119709696 unmapped: 24395776 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'log dump' '{prefix=log dump}'
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'perf dump' '{prefix=perf dump}'
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:10.974890+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'perf schema' '{prefix=perf schema}'
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119955456 unmapped: 24150016 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:11.975025+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119996416 unmapped: 24109056 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:12.975164+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119996416 unmapped: 24109056 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:13.975275+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 119996416 unmapped: 24109056 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:14.975418+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:15.975542+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:16.975669+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:17.975838+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:18.975972+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:19.976411+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:20.976565+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:21.976707+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:22.976857+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:23.976990+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:24.977128+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:25.977367+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:26.977501+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:27.977647+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:28.977779+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:29.977912+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:30.978038+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 24100864 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:31.978153+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:32.978328+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:33.978466+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:34.978599+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:35.978735+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:36.978888+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:37.979067+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:38.979573+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:39.979702+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:40.979928+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:41.980087+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:42.980242+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:43.980504+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:44.980639+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:45.980825+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:46.980986+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 24092672 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:47.981255+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 24084480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:48.981440+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 24084480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:49.981578+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 24084480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:50.981736+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 24084480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:51.981893+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 24084480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3943 syncs, 3.29 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2161 writes, 6920 keys, 2161 commit groups, 1.0 writes per commit group, ingest: 6.55 MB, 0.01 MB/s
                                           Interval WAL: 2161 writes, 930 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:52.982071+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 24084480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:53.982278+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 24084480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:54.982490+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 24084480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:55.982677+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 24084480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:56.982839+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 24084480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:57.983021+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 24084480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:58.983287+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 24084480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:19:59.983526+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 24084480 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:00.983697+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 24076288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:01.983820+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 24076288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:02.983993+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 24076288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:03.984163+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 24076288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:04.984374+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 24076288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:05.984520+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 24076288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:06.984701+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 24076288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:07.984942+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 24076288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:08.985130+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 24076288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:09.985281+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 24076288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:10.985388+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 24076288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:11.985522+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 24076288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:12.985657+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 24076288 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:13.985800+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:14.985933+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:15.986065+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:16.986262+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:17.986453+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:18.986628+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:19.986789+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:20.986928+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:21.987088+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:22.987295+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:23.987461+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:24.987622+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:25.987789+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:26.987965+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120037376 unmapped: 24068096 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:27.988164+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:28.988363+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:29.988601+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:30.988787+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:31.988989+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:32.989256+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:33.989393+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:34.989553+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:35.989675+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:36.989881+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:37.990070+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:38.990256+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:39.990451+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:40.990836+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:41.991042+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:42.991218+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:43.991380+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 24059904 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:44.991569+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:45.991733+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:46.991910+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:47.992171+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:48.992383+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:49.992544+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:50.992704+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:51.992880+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:52.993045+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:53.993256+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:54.993390+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:55.993545+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:56.993705+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:57.993977+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:58.994156+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 24051712 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:20:59.994387+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:00.994542+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:01.994711+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:02.994892+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:03.995122+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:04.995323+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:05.995483+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:06.995646+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:07.995833+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 24043520 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:08.995997+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:09.996131+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:10.996266+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:11.996402+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:12.996531+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:13.996710+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:14.996856+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:15.996988+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:16.997163+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:17.997442+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:18.997610+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:19.997766+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 24035328 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:20.997904+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 24027136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:21.998056+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 24027136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:22.998366+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 24027136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:23.998544+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 24027136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:24.998731+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 24027136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:25.998902+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 24027136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:26.999046+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 24027136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:27.999317+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 24027136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:28.999546+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 24027136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:29.999749+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 24027136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:30.999911+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 24027136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:32.000055+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120078336 unmapped: 24027136 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:33.000208+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 24018944 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:34.000359+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 24018944 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:35.000557+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 24018944 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:36.000695+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 24018944 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:37.000816+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 24018944 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:38.000970+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 24018944 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:39.001100+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 24018944 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:40.001265+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 24018944 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:41.001404+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 24018944 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:42.001546+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 24018944 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:43.001730+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 24018944 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:44.001846+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120086528 unmapped: 24018944 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:45.002020+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 24010752 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:46.002151+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa015000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 24010752 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:47.002271+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 24010752 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:48.002487+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 24010752 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:49.002654+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 245.590560913s of 245.771057129s, submitted: 28
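This is the most informative line in the stretch: over a ~245.8 s window the kv sync (commit) thread was idle ~245.6 s and submitted only 28 transactions, i.e. under 0.1% busy. A later report in this section (idle 34.274890900s of 36.611503601s, submitted: 190) works out to about 6.4% busy, so a little write traffic did arrive between the two windows. The busy fraction is simply 1 - idle/elapsed:

def kv_sync_busy(idle_s, elapsed_s, submitted):
    busy = 1.0 - idle_s / elapsed_s
    print(f"busy {busy:.3%} over {elapsed_s:.1f}s, "
          f"{submitted/elapsed_s:.2f} txns/s")

kv_sync_busy(245.590560913, 245.771057129, 28)  # this report: ~0.073% busy
kv_sync_busy(34.274890900, 36.611503601, 190)   # later report: ~6.4% busy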
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120111104 unmapped: 23994368 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:50.002819+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120152064 unmapped: 23953408 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:51.003003+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120193024 unmapped: 23912448 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:52.003139+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:53.003296+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:54.003408+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:55.003585+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:56.003765+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18207 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
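Interleaved with the OSD chatter, the mgr audit channel records an administrative query: client.admin dispatched 'orch ls' with export=true, which is what running something like 'ceph orch ls --export' produces (the exact CLI invocation is an inference; the audit line records only the parsed command). The cmd field is plain JSON, so it can be lifted straight out of the line; the regex is a convenience for this capture, not a stable audit schema:

import json, re

line = ("log_channel(audit) log [DBG] : from='client.18207 -' "
        "entity='client.admin' cmd=[{\"prefix\": \"orch ls\", "
        "\"export\": true, \"target\": [\"mon-mgr\", \"\"]}]: dispatch")

# Pull the JSON array out of the cmd=[...] field (single-line form assumed).
payload = re.search(r"cmd=(\[.*\]):", line).group(1)
for cmd in json.loads(payload):
    print(cmd["prefix"], {k: v for k, v in cmd.items() if k != "prefix"})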
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:57.003960+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:58.004153+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:21:59.004330+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:00.004452+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:01.004595+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:02.004727+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:03.004877+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:04.005037+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:05.005224+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:06.005374+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:07.005531+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:08.006355+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:09.006509+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:10.006786+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:11.006930+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:12.007083+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:13.007261+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:14.007484+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:15.007604+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:16.007806+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:17.007952+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:18.008257+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:19.008427+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:20.008565+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:21.008694+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:22.008857+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:23.009007+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:24.009227+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:25.009433+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:26.009618+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 23887872 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.274890900s of 36.611503601s, submitted: 190
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:27.009742+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120225792 unmapped: 23879680 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245728 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:28.009913+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 23846912 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:29.010131+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:30.010315+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:31.010463+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:32.010622+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:33.010815+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:34.011018+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:35.011168+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:36.011398+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:37.011526+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:38.011732+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:39.011897+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:40.012033+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf5e0800 session 0x55c5bf1b72c0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f5000
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:41.012392+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:42.012556+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 23814144 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:43.012709+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:44.012863+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:45.013020+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:46.013216+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:47.013410+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:48.013627+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:49.013796+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:50.013941+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:51.014088+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:52.014314+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:53.014487+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:54.014658+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:55.014811+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:56.014964+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:57.015081+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:58.015280+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:22:59.015454+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:00.015616+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:01.015791+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:02.015942+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:03.016157+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:04.016293+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:05.016432+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:06.016553+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:07.016742+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:08.016900+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:09.017028+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:10.017229+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:11.017361+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:12.017563+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:13.017744+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:14.017871+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:15.018068+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:16.018242+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:17.018554+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:18.018752+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:19.018942+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:20.019139+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:21.019338+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:22.019538+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120299520 unmapped: 23805952 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:23.019746+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120307712 unmapped: 23797760 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:24.019931+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120307712 unmapped: 23797760 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:25.020106+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120307712 unmapped: 23797760 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:26.020257+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120307712 unmapped: 23797760 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:27.020431+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120307712 unmapped: 23797760 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:28.020661+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120307712 unmapped: 23797760 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:29.020846+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120307712 unmapped: 23797760 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:30.021009+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120307712 unmapped: 23797760 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:31.021156+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120307712 unmapped: 23797760 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:32.021273+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:33.021422+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:34.021596+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:35.021763+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:36.021926+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:37.022054+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:38.022233+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:39.022470+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:40.022612+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:41.022743+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:42.022892+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:43.023020+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 26 10:29:53 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3579023726' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:44.023172+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:45.023311+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:46.023481+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:47.023639+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23789568 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:48.023844+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23781376 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:49.024001+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23781376 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:50.024135+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23781376 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:51.024305+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23781376 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:52.024457+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23781376 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:53.024654+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23781376 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:54.024788+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23781376 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:55.024920+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23781376 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:56.025088+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23781376 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:57.025325+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23781376 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:58.025594+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23781376 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:23:59.025741+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23781376 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:00.025915+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23781376 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:01.026123+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:02.026282+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:03.026450+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:04.026621+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:05.026776+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:06.026928+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:07.027076+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:08.027276+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:09.027412+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:10.027548+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:11.027689+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:12.027829+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:13.027978+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:14.028140+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:15.028304+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:16.028492+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120332288 unmapped: 23773184 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:17.028641+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 23764992 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:18.028812+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 23764992 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:19.029054+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 23764992 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:20.029236+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 23764992 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:21.029382+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 23764992 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:22.029573+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 23764992 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:23.029718+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 23764992 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:24.029896+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 23764992 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:25.030031+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 23764992 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:26.030340+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 23764992 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:27.030487+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 23764992 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:28.030712+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 23764992 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:29.030847+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120340480 unmapped: 23764992 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:30.031003+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:31.031138+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:32.031290+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:33.031453+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:34.031620+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:35.031780+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:36.031913+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:37.032066+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:38.032293+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:39.032489+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:40.032689+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:41.032866+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:42.033021+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:43.033309+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:44.033486+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120348672 unmapped: 23756800 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:45.033650+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 23748608 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:46.033834+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 23748608 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:47.034016+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 23748608 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:48.034314+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 23748608 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:49.034596+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 23748608 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:50.034867+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 23748608 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:51.035184+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 23748608 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:52.035570+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 23748608 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets getting new tickets!
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:53.035860+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _finish_auth 0
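This is the one tick in the burst where the routine check found tickets due for refresh: "getting new tickets!" triggers a request to mon.compute-0 at v2:192.168.122.100:3300, and _finish_auth 0 two lines later reports the renewal completing with return code 0, i.e. success on the usual errno convention. A small sketch, assuming the journal has been saved to a plain-text file passed as the first argument, picks these round trips (and any stray sends) out of the noise:

    import re, sys

    # Scan a saved copy of this journal for ticket-renewal round trips.
    pat = re.compile(r"getting new tickets!"
                     r"|_send_mon_message to (\S+ at \S+)"
                     r"|_finish_auth (\d+)")
    for line in open(sys.argv[1]):
        m = pat.search(line)
        if m is None:
            continue
        if m.group(0) == "getting new tickets!":
            print("renewal started")
        elif m.group(1):
            print(f"  request sent to {m.group(1)}")
        else:
            print(f"  finished, rc={m.group(2)}")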
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:53.036641+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 23748608 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:54.036186+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 23748608 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:55.036466+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 23748608 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:56.036601+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 23748608 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:57.036798+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:58.037052+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:24:59.037314+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:00.037588+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:01.037816+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:02.038029+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:03.038245+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:04.038438+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:05.038643+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:06.038811+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:07.038991+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:08.039319+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:09.039581+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:10.039773+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:11.039939+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:12.040089+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:13.040269+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:14.040431+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:15.040639+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 23740416 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:16.040785+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:53.636Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
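The only non-routine entry in this stretch: the Alertmanager container on compute-0 gives up delivering a dashboard notification because both receivers time out. The targets are the Ceph dashboard's prometheus_receiver endpoints on compute-1 and compute-2, and "context deadline exceeded" points at connectivity (host down, firewall, or nothing listening on 8443) rather than a rejected request. A hypothetical probe, using only the URL copied from the log plus a minimal hand-written payload in the standard Alertmanager webhook envelope:

    import json, urllib.request

    # Replay a minimal webhook POST against one receiver that timed out above.
    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    payload = {"version": "4", "status": "firing", "alerts": []}

    req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)
    except OSError as exc:   # timeout, refused, and TLS mismatch all land here
        print(f"unreachable: {exc}")

One detail worth checking while probing: the receiver URL uses the http scheme on port 8443, which is conventionally a TLS port, so if the dashboard is serving HTTPS there the fix is the receiver URL in the Alertmanager configuration, not the network.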
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 23732224 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:17.040961+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 23732224 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:18.041229+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 23732224 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:19.041366+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 23732224 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:20.041493+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 23732224 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:21.041659+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 23732224 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:22.041791+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 23732224 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:23.041943+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 23732224 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:24.042094+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 23732224 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:25.042263+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 23732224 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:26.042413+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 23732224 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:27.042566+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 23732224 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:28.042781+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 23732224 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:29.042980+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 23724032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:30.043187+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 23724032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:31.043375+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 23724032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:32.043541+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 23724032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:33.043771+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 23724032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:34.043912+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 23724032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:35.044088+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 23724032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:36.044241+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 23724032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:37.044378+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 23724032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:38.044614+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 23724032 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:39.044782+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 23715840 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:40.044972+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 23715840 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:41.045132+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 23715840 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:42.045310+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 23715840 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:43.045487+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 23715840 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:44.045684+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 23715840 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:45.045894+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 23715840 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:46.046036+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 23715840 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:47.046266+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 23715840 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:48.046438+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:49.046574+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:50.046770+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:51.046951+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:52.047141+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:53.047322+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:54.047450+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:55.047700+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:56.047893+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:57.048063+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:58.048279+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:25:59.048433+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:00.048603+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:01.048758+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:02.048877+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:03.049045+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:04.049166+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 23699456 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:05.049315+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 23699456 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:06.049456+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 23699456 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:07.049621+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 23699456 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:08.049782+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 23699456 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:09.049942+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 23699456 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:10.050095+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 23699456 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:11.050267+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120406016 unmapped: 23699456 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:12.050429+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120414208 unmapped: 23691264 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:13.050559+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120414208 unmapped: 23691264 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:14.050688+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120414208 unmapped: 23691264 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:15.050857+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120414208 unmapped: 23691264 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:16.050988+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120414208 unmapped: 23691264 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:17.051142+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120414208 unmapped: 23691264 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:18.051296+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120414208 unmapped: 23691264 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:19.051432+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120414208 unmapped: 23691264 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:20.051565+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120414208 unmapped: 23691264 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:21.051674+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:22.051813+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:23.051991+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:24.053058+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:25.053182+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:26.053361+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:27.053496+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:28.054239+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:29.054446+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:30.054799+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:31.054945+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:32.055069+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:33.055313+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:34.055530+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 23683072 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:35.055699+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 23674880 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:36.055907+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 23674880 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:37.056327+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 23674880 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:38.056595+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 23674880 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:39.056931+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 23674880 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:40.057085+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 23674880 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:41.057281+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c04d5400 session 0x55c5bff89680
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf5e0800
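
Note: this pair (and the matching pair a few lines below) is the only non-periodic traffic in the section: osd.0 observes a messenger connection reset (ms_handle_reset; the 0x... values are the connection and session object addresses) and immediately handles a fresh cephx auth challenge, which is the normal sequence when a peer drops and re-establishes its session. At this debug verbosity it reads as routine churn; it would only warrant attention if it coincided with missed heartbeats, which it does not here. To skim a capture for just these events (a trivial filter; the input is assumed to be an iterable of log lines):

    INTERESTING = ("ms_handle_reset", "handle_auth_request")

    def anomalies(lines):
        """Yield only the non-periodic events, skipping tick/heartbeat noise."""
        for line in lines:
            if any(key in line for key in INTERESTING):
                yield line
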
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 23674880 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:42.057474+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5c067f400 session 0x55c5bff88960
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06d9400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 23674880 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:43.057601+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 23674880 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:44.057754+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 23674880 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:45.057883+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 23674880 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:46.058033+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 23674880 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:47.058167+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120430592 unmapped: 23674880 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:48.058418+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:49.058620+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 23666688 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:50.058850+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 23666688 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:51.059048+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 23666688 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:52.059513+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 23666688 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:53.059693+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 23666688 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:54.059924+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 23666688 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:55.060110+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 23666688 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:56.060295+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 23666688 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:57.060484+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 23666688 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:58.060694+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 23666688 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:26:59.060939+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120438784 unmapped: 23666688 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:00.061113+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 23658496 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:01.061371+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 23658496 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:02.061576+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 23658496 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:03.061788+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 23658496 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:04.061957+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 23658496 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:05.062170+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 23658496 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:06.062389+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 23658496 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:07.062589+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 23658496 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:08.062839+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 23658496 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:09.063035+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 23658496 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:10.063236+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 23658496 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:11.063433+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 23658496 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:12.063613+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 23658496 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:13.063776+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 23650304 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:14.063903+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 23650304 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:15.064039+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 23650304 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:16.064164+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 23650304 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:17.064290+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 23650304 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:18.064464+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 23650304 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:19.064629+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 23650304 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:20.064780+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 23650304 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:21.064949+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 23650304 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:22.065126+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 23650304 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:23.065304+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 23650304 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:24.065427+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 23642112 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4400 session 0x55c5bf1de5a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c0852800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:25.065633+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 23642112 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:26.065800+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 23642112 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:27.065966+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4800 session 0x55c5bd7ea5a0
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5bf9f4400
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 23642112 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:28.066254+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 23642112 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:29.066747+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 23642112 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:30.067333+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 23642112 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:31.067487+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 23642112 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:32.067644+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 23642112 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 ms_handle_reset con 0x55c5bf9f4c00 session 0x55c5c0999680
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: handle_auth_request added challenge on 0x55c5c06d8800
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:33.067780+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 23642112 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:34.068011+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:35.068253+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:36.068531+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:37.068683+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:38.068882+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:39.069290+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:40.069414+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:41.069773+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:42.070024+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:43.070187+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:44.070433+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:45.070595+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:46.070762+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:47.070891+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120471552 unmapped: 23633920 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:48.071059+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 23625728 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:49.071187+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 23625728 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:50.071361+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 23625728 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:51.071520+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 23625728 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:52.071730+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 23625728 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:53.071973+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 23625728 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:54.072153+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 23625728 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:55.072295+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 23625728 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:56.072465+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 23625728 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:57.072633+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 23625728 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:58.072826+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:27:59.072973+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:00.073123+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:01.073277+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:02.073415+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:03.073559+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:04.073708+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:05.073913+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:06.074270+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:07.074432+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:08.074637+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:09.074865+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:10.075027+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:11.075148+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 23617536 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:12.075310+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120496128 unmapped: 23609344 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:13.075451+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120496128 unmapped: 23609344 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:14.075751+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120496128 unmapped: 23609344 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:15.075893+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120496128 unmapped: 23609344 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:16.076093+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120496128 unmapped: 23609344 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:17.076265+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120496128 unmapped: 23609344 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:18.076489+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120496128 unmapped: 23609344 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:19.076805+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120496128 unmapped: 23609344 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:20.077032+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120496128 unmapped: 23609344 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:21.077212+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120496128 unmapped: 23609344 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:22.077406+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:23.077624+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:24.077774+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:25.077950+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:26.078310+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:27.078448+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:28.078711+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:29.078858+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:30.079058+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:31.079271+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:32.079487+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:33.079684+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:34.079944+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:35.080177+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:36.080385+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:37.080554+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 23601152 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:38.080764+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 23592960 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:39.080894+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 23592960 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:40.081018+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 23592960 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:41.081271+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 23592960 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:42.081420+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 23592960 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:43.081722+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 23592960 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:44.081985+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 23592960 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:45.082281+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 23592960 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:46.082514+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 23592960 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:47.082697+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 23592960 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:48.082914+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 23592960 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:49.083382+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 23592960 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:50.083665+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120512512 unmapped: 23592960 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:51.083895+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:52.084086+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:53.084273+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:54.084455+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:55.084716+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:56.084926+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:57.085104+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:58.085273+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:28:59.085399+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:00.085528+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:01.085651+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:02.085838+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:03.085989+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:04.086327+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 23584768 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:05.086538+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:06.086725+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:07.086908+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:08.087110+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:09.087282+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:10.087413+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:11.087523+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:12.087666+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:13.087788+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:14.087923+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:15.088040+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:16.088222+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:17.088348+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:18.088504+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fb035000/0x0/0x4ffc00000, data 0xd73743/0xe37000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [1,2] op hist [])
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:19.088654+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 10:29:53 compute-0 ceph-osd[82841]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 10:29:53 compute-0 ceph-osd[82841]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245656 data_alloc: 218103808 data_used: 6987776
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 23576576 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:20.088787+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 23560192 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'config diff' '{prefix=config diff}'
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:21.088920+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'config show' '{prefix=config show}'
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'counter dump' '{prefix=counter dump}'
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'counter schema' '{prefix=counter schema}'
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 23707648 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:22.089091+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 23748608 heap: 144105472 old mem: 2845415832 new mem: 2845415832
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: tick
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_tickets
Jan 26 10:29:53 compute-0 ceph-osd[82841]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T10:29:23.089234+0000)
Jan 26 10:29:53 compute-0 ceph-osd[82841]: do_command 'log dump' '{prefix=log dump}'
Jan 26 10:29:53 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 10:29:53 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27625 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:53 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:53 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:53 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:53.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:53 compute-0 nova_compute[254880]: 2026-01-26 10:29:53.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:29:53 compute-0 nova_compute[254880]: 2026-01-26 10:29:53.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:29:54 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18219 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28004 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.27941 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.18162 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3425527103' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2923609229' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.27962 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.18186 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3538019178' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.27604 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2468740989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2445155982' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3579023726' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/624911500' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3977685281' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3975494299' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1439: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:54 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27646 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 26 10:29:54 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2035482829' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28025 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18243 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:54 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:54 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:54 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:54.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:54 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27673 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:29:54.718 166625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 10:29:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:29:54.718 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 10:29:54 compute-0 ovn_metadata_agent[166620]: 2026-01-26 10:29:54.718 166625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 10:29:54 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28052 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18273 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:54 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 26 10:29:54 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3756206751' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 26 10:29:54 compute-0 nova_compute[254880]: 2026-01-26 10:29:54.958 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:29:54 compute-0 nova_compute[254880]: 2026-01-26 10:29:54.959 254884 DEBUG nova.compute.manager [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 10:29:55 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27691 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.27983 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.18207 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.27625 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.18219 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.28004 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: pgmap v1439: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.27646 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/971384341' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2035482829' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.28025 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.18243 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4234181687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2206345920' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1628073812' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3756206751' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2249248986' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 26 10:29:55 compute-0 crontab[297105]: (root) LIST (root)
Jan 26 10:29:55 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28073 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18282 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.361649) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423395361690, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1231, "num_deletes": 250, "total_data_size": 2063869, "memory_usage": 2086424, "flush_reason": "Manual Compaction"}
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423395371998, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 1272752, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39406, "largest_seqno": 40636, "table_properties": {"data_size": 1267847, "index_size": 2173, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 14079, "raw_average_key_size": 21, "raw_value_size": 1256855, "raw_average_value_size": 1948, "num_data_blocks": 95, "num_entries": 645, "num_filter_entries": 645, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769423294, "oldest_key_time": 1769423294, "file_creation_time": 1769423395, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 10389 microseconds, and 3661 cpu microseconds.
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.372038) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 1272752 bytes OK
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.372058) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.373582) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.373593) EVENT_LOG_v1 {"time_micros": 1769423395373590, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.373608) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2058110, prev total WAL file size 2058110, number of live WAL files 2.
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.374188) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323530' seq:72057594037927935, type:22 .. '6D6772737461740031353031' seq:0, type:0; will stop at (end)
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(1242KB)], [86(14MB)]
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423395374246, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 16080936, "oldest_snapshot_seqno": -1}
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 7099 keys, 12714471 bytes, temperature: kUnknown
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423395438624, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 12714471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12671427, "index_size": 24180, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17797, "raw_key_size": 186961, "raw_average_key_size": 26, "raw_value_size": 12547937, "raw_average_value_size": 1767, "num_data_blocks": 942, "num_entries": 7099, "num_filter_entries": 7099, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769420301, "oldest_key_time": 0, "file_creation_time": 1769423395, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "61a73b27-20ff-4d9e-babd-7b87c9b5b4e0", "db_session_id": "4MS8UCW9WHMM6ZPZ0YHT", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.438891) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 12714471 bytes
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.441110) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 249.5 rd, 197.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.1 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(22.6) write-amplify(10.0) OK, records in: 7567, records dropped: 468 output_compression: NoCompression
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.441126) EVENT_LOG_v1 {"time_micros": 1769423395441118, "job": 50, "event": "compaction_finished", "compaction_time_micros": 64455, "compaction_time_cpu_micros": 24855, "output_level": 6, "num_output_files": 1, "total_output_size": 12714471, "num_input_records": 7567, "num_output_records": 7099, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423395441411, "job": 50, "event": "table_file_deletion", "file_number": 88}
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769423395443642, "job": 50, "event": "table_file_deletion", "file_number": 86}
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.374147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.443670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.443674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.443676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.443677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:29:55 compute-0 ceph-mon[74456]: rocksdb: (Original Log Time 2026/01/26-10:29:55.443680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 10:29:55 compute-0 nova_compute[254880]: 2026-01-26 10:29:55.459 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:55 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27718 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28097 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18306 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:55 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 26 10:29:55 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/374892030' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 26 10:29:55 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:55 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:55 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:55.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:55 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28115 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18336 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27739 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1440: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:56 compute-0 ceph-mon[74456]: from='client.27673 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mon[74456]: from='client.28052 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mon[74456]: from='client.18273 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mon[74456]: from='client.27691 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mon[74456]: from='client.28073 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4240491322' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mon[74456]: from='client.18282 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3093970821' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/374892030' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/288116703' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4002871036' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 26 10:29:56 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/574585194' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28136 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18351 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27766 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:56 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:56 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:56 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:56.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 26 10:29:56 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862678016' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 26 10:29:56 compute-0 nova_compute[254880]: 2026-01-26 10:29:56.637 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:29:56 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:29:56] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Jan 26 10:29:56 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:29:56] "GET /metrics HTTP/1.1" 200 48531 "" "Prometheus/2.51.0"
Jan 26 10:29:56 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27781 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:56 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 26 10:29:56 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355982261' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:56 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:29:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:29:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:29:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:29:57 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:29:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 26 10:29:57 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1617965274' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28193 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:57.313Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:29:57 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:57.314Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:29:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 26 10:29:57 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3789585773' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.27718 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.28097 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.18306 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.28115 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.18336 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.27739 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: pgmap v1440: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/574585194' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4191693100' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.28136 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.18351 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/862678016' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1771140372' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2355982261' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2958203862' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2428474459' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1617965274' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2765354709' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1027645102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 26 10:29:57 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3927723701' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27808 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 26 10:29:57 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1231297347' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 26 10:29:57 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 26 10:29:57 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1743187370' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 26 10:29:57 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:57 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:57 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:57.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:58 compute-0 podman[297508]: 2026-01-26 10:29:58.137895203 +0000 UTC m=+0.068470979 container health_status 6e899a7cfe32efb7514547c063b2d9c9ea844c4876f492c9cb6713eb64e584a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0a2bdd9ca85c110d360e1b96c9ab7abba927ef726c1f1a03bbbf5758fb36692b-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123-07fd3ba75e8d365f158b5842b1893c43cb182aa8a22cf8a0a2b709b613077123'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 10:29:58 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1441: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 26 10:29:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318151391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 26 10:29:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131168614' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.27766 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.27781 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.28193 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3789585773' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3989524323' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3927723701' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1625537296' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/24183373' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1231297347' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1743187370' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3510057928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2644301426' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1763956724' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4256254640' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/844049035' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.10:0/844049035' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3318151391' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/131168614' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/181875530' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2135477831' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2958399404' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 26 10:29:58 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:58 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:29:58 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:29:58.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:29:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 26 10:29:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3600782431' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 26 10:29:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/624103412' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 26 10:29:58 compute-0 systemd[1]: Starting Hostname Service...
Jan 26 10:29:58 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 26 10:29:58 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376148640' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 26 10:29:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:58.949Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 26 10:29:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:58.949Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:29:58 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:29:58.950Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 26 10:29:58 compute-0 nova_compute[254880]: 2026-01-26 10:29:58.959 254884 DEBUG oslo_service.periodic_task [None req-9cedf564-0488-4123-8784-c24c9986b906 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 10:29:59 compute-0 systemd[1]: Started Hostname Service.
Jan 26 10:29:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 26 10:29:59 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/815486066' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18504 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.27808 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: pgmap v1441: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2484893466' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3600782431' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/624103412' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3155950399' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/4061816220' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2476322614' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/2719211327' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3376148640' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3651764905' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/815486066' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2654257985' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/869627358' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/44222642' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28322 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 26 10:29:59 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69190768' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18516 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:59 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:29:59 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:29:59 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:29:59.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:29:59 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28340 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28346 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:29:59 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18522 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mon[74456]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Jan 26 10:30:00 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1442: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:30:00 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28355 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18546 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:30:00 compute-0 nova_compute[254880]: 2026-01-26 10:30:00.461 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:30:00 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:30:00 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:30:00 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:30:00.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:30:00 compute-0 ceph-mon[74456]: from='client.18504 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/69190768' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2665981540' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4233254299' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1343797294' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4102320703' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mon[74456]: overall HEALTH_WARN 1 OSD(s) experiencing slow operations in BlueStore; 1 failed cephadm daemon(s)
Jan 26 10:30:00 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/1960142830' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3121584314' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28379 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18564 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 26 10:30:00 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1939256608' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 26 10:30:00 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27943 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28400 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18576 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27967 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Jan 26 10:30:01 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2020780984' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27973 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18591 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.28322 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.18516 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.28340 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.28346 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.18522 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: pgmap v1442: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.28355 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.18546 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3687375884' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.28379 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.18564 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1939256608' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.27943 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/367892269' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4029590274' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.28400 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.18576 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.27967 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2020780984' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1413975470' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mon[74456]: from='client.27973 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18597 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 nova_compute[254880]: 2026-01-26 10:30:01.640 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:30:01 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 26 10:30:01 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4096719302' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.27991 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28436 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:01 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:30:01 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:30:01 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:30:01.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:30:01 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18612 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:30:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 26 10:30:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:30:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 26 10:30:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:30:01 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 26 10:30:02 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-nfs-cephfs-2-0-compute-0-zfynkw[269328]: 26/01/2026 10:30:02 : epoch 69773e28 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 26 10:30:02 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 26 10:30:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857393829' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28012 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1443: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:30:02 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28460 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:30:02 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18630 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:30:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:30:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:30:02 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:30:02 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:30:02 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:30:02.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:30:02 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28021 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='client.18591 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='client.18597 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/4096719302' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='client.27991 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2067073806' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='client.28436 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='client.18612 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1558495807' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1857393829' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='client.28012 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: pgmap v1443: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3882675605' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='client.28460 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='client.18630 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:30:02 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:30:02 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28057 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28087 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 26 10:30:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331959644' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='client.28021 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/824904298' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='client.28057 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4207651308' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='client.28087 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1331959644' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/1342520033' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/3931590477' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 26 10:30:03 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-alertmanager-compute-0[104921]: ts=2026-01-26T10:30:03.637Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 26 10:30:03 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28547 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:30:03 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18711 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28108 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Jan 26 10:30:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:30:03 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:30:03 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:30:03 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:30:03 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:30:03.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:30:04 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1444: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:30:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 26 10:30:04 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3993199471' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 26 10:30:04 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:30:04 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 10:30:04 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:30:04.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='client.28547 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='client.18711 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='client.28108 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='mgr.14697 192.168.122.100:0/270092481' entity='mgr.compute-0.zllcia' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/186538420' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 26 10:30:04 compute-0 ceph-mon[74456]: pgmap v1444: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3993199471' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 10:30:04 compute-0 ceph-mon[74456]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 10:30:04 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Jan 26 10:30:04 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430923927' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 26 10:30:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 26 10:30:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2923029982' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 26 10:30:05 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28604 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 10:30:05 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 26 10:30:05 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1239806339' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 26 10:30:05 compute-0 nova_compute[254880]: 2026-01-26 10:30:05.502 254884 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 10:30:05 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/2070446568' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 26 10:30:05 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/3430923927' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 26 10:30:05 compute-0 ceph-mon[74456]: from='client.? 192.168.122.102:0/4021159621' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 26 10:30:05 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3069245116' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 26 10:30:05 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/2923029982' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 26 10:30:05 compute-0 ceph-mon[74456]: from='client.28604 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:05 compute-0 ceph-mon[74456]: from='client.? 192.168.122.101:0/3783749478' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 26 10:30:05 compute-0 ceph-mon[74456]: from='client.? 192.168.122.100:0/1239806339' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 26 10:30:05 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.28619 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:05 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:30:05 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:30:05 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.100 - anonymous [26/Jan/2026:10:30:05.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:30:05 compute-0 ceph-mgr[74755]: log_channel(audit) log [DBG] : from='client.18774 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 10:30:06 compute-0 ceph-mgr[74755]: log_channel(cluster) log [DBG] : pgmap v1445: 353 pgs: 353 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 26 10:30:06 compute-0 ceph-mon[74456]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 26 10:30:06 compute-0 ceph-mon[74456]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015018809' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 26 10:30:06 compute-0 radosgw[96326]: ====== starting new request req=0x7f3d452dd5d0 =====
Jan 26 10:30:06 compute-0 radosgw[96326]: ====== req done req=0x7f3d452dd5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 10:30:06 compute-0 radosgw[96326]: beast: 0x7f3d452dd5d0: 192.168.122.102 - anonymous [26/Jan/2026:10:30:06.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 10:30:06 compute-0 ceph-1a70b85d-e3fd-5814-8a6a-37ea00fcae30-mgr-compute-0-zllcia[74751]: ::ffff:192.168.122.100 - - [26/Jan/2026:10:30:06] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"
Jan 26 10:30:06 compute-0 ceph-mgr[74755]: [prometheus INFO cherrypy.access.140672629033424] ::ffff:192.168.122.100 - - [26/Jan/2026:10:30:06] "GET /metrics HTTP/1.1" 200 48538 "" "Prometheus/2.51.0"